Presumably, the program would have the potential to understand things. If
it only used text, then a good AGI program would have to be able to
acquire a lot of knowledge about a lot of different things through words.
The problem is that, because of the ambiguity of words and the ambiguity
of their use, it would have a lot of trouble interpreting the meaning
of a sentence. The different ways you can speak of your fondness for
apples demonstrate a little of the extent of this knowledge. My
theory is that we should use this range of knowledge in determining the
meaning of words. Since this approach has the potential, addresses the
problem, and is computationally feasible, I think it might work. I
consider this method to be a way to form good judgement.
I am really thinking of a general AI program, not just a translator, but I
think that this method is relevant. For example, all your sentences about
your fondness for apples reveal something about the subject, and those
sentences could subsequently be used in a variety of situations that use
some of the same words - so long as the program had some good way to start
learning about sentences.
By exploring the different ways people talk about things that they like, by
making distinctions between different kinds of likable things, and by
integrating this knowledge with other kinds of knowledge, I believe that a
program could choose good translations for the particular situation that it
is in.
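To make this concrete, here is a minimal sketch of the create-and-test idea in Python. The word-overlap score is a deliberately naive stand-in for the variety of analytical methods a real program would need, and every name and example sentence in it is hypothetical:

```python
# Sketch of create-and-test: generate candidate sentences, then score
# each one against a small "memory" of sentences the program has seen.
# The score is naive word overlap; a real system would need far richer
# analytical methods, as discussed in this thread.

def score(candidate, memory):
    """Average fraction of the candidate's words that appear in each remembered sentence."""
    cand_words = set(candidate.lower().split())
    overlaps = [
        len(cand_words & set(m.lower().split())) / len(cand_words)
        for m in memory
    ]
    return sum(overlaps) / len(overlaps)

def best_translation(candidates, memory):
    """Pick the candidate whose wording best matches past usage."""
    return max(candidates, key=lambda c: score(c, memory))

memory = [
    "I like apples",
    "I am fond of apples",
    "Apples are nice",
]
candidates = ["I like apples", "Apples fabulous are", "My passion is apples"]
print(best_translation(candidates, memory))  # prints "I like apples"
```

The point is only the shape of the method: cheap generation followed by scoring against accumulated knowledge, with the scoring function being the hard, open part of the problem.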
Jim Bromer
On Fri, Apr 15, 2011 at 9:03 AM, Daniel <daniel.burke@yahoo.com> wrote:
>
>
> I like the create and test idea.
>
> If a system created, say, 5 possible sentences as appropriate for a
> translation, how would it test each one? There would need to be a scoring
> system of some sort. Since language can be infinitely complex and extensible,
> the number of possible ways to say the same thing could be huge.
>
> E.g.:
> I like apples
> I am fond of Apples
> Apples are nice
> I love Apples
> Apples are fabulous
> Apples are so sweet
> I am in love with the Apple
> I have an Apple passion
> My passion is Apples
>
> Etc. Etc. And this is for a three-word base sentence. Can you imagine the
> different ways to say a longer sentence?
>
> The list goes on and on; how do you tell which one is "best"?
>
> Dan
>
>
> --- In artificialintelligencegroup@yahoogroups.com, "bromer2007"
> <jimbromer@...> wrote:
> >
> > > So how do we make sentences in the first place? Any ideas anyone? I
> have a few but wondered what others felt.
> > >
> > > Dan
> >
> > Of course we use memories of events as well as memories of how to use
> words and sentences. However, since we have to comprehend ongoing events in
> terms of past events, it seems clear that the methods we use to
> comprehend what is going on are built from some kind of compounds of
> generalities. Although using language is different from general
> conceptualization, I believe that there are many similarities.
> >
> > So this means, for example, that I don't think iconic grounding is
> necessary (absolutely necessary) for higher intelligence, and so far, there
> is no evidence suggesting that it is.
> >
> > The problem as I see it is just one of complexity. If a computer program
> could examine many different possible expressions that might be used to
> describe a situation it might be able, after a lot of learning, to decide
> which one is most appropriate. But because of the problem of referential
> ambiguity, the number of possible combinations of meanings increases
> at a rate that is nearly intractable as the amount of knowledge learned
> increases.
> >
> > If this were not the case, it would be easy to test different strategies.
> >
> > My basic strategy would be for the program to create a possible sentence
> and then examine it using a variety of analytical methods that are related
> to the subject (the subjects) of the sentence. Since we are talking about a
> situation where words, word-phrases and sentences may take on different
> meanings, this method of testing an expression makes a lot of sense. In
> other words, the program is not testing every possible interpretation of a
> sentence, but it does have to examine a great many of them.
> >
> > Computers work well with mathematical problems in which the narrow
> result of a sequence of computations is used as the input to the
> next step in a problem that can eventually be solved with a narrow solution.
> (A narrow solution is a solution with a feasible number of precise correct
> evaluations.) General AI (or AGI) does not seem to reduce to systems of
> problems that all have narrowly correct values. If it did, it would be easy
> to test different strategies.
> >
> > Jim Bromer
> >
> >
> > --- In artificialintelligencegroup@yahoogroups.com, "Daniel"
> <daniel.burke@> wrote:
> > >
> > > In order to translate from one language to another, the person obviously
> needs to know both languages well. It got me thinking about how language is
> stored in your head and how you generate sentences in the first place.
> > > Seems to me the translation process works like this.
> > >
> > > Take the source material, read it through, and make sure you understand
> the MEANING of all the sentences.
> > >
> > > Then, in the target language, make sentences that have the same meaning.
> > >
> > > This is why machine translation is so hard: words that individually
> mean the same may not mean the same when in bigger language chunks such as
> phrases, sayings, or local terminology.
> > >
> > > How do we make sure the meaning is the same in the source and target
> language?
> > > And finally how do we ensure the correct word order in the target
> language?
> > >
> > > So how do we make sentences in the first place? Any ideas anyone? I
> have a few but wondered what others felt.
> > >
> > > Dan
> > >
> >
>
>
>
Sunday, April 17, 2011
Re: [Artificial Intelligence Group] Re: Machine translation and AI