

Machine “Translation” and What Words Mean in Context

One of the biggest and most commonly known flaws of machine translation is a computer’s inability to understand how meaning changes with context.  After all, a machine doesn’t know what a “horse” is.  It knows that “caballo” has (roughly) the same meaning in Spanish as “horse” does in English.  But it doesn’t know what that meaning is.

And it certainly doesn’t know what it means when we say that someone has a “horse face” (or a “face like a horse”).

 

But humans can misunderstand meaning in context, too.  For example, if you don’t know how “machine translation” works, you’d think that machines could actually translate or produce translations.  You would be wrong.  What a human does to produce a translation is not the same as what a machine does to produce a “translation”.  That’s why machine and human translators make different mistakes when trying to render the original meaning in the new language.

 

A human brain converts words from the source language into meaning, and the meaning back into words in the target language.  A computer converts words from the source language directly to words in the target language, creating a so-called “literal” translation.  A computer would suck at translating a novel, because the figures of speech that make prose (or poetry) what it is are incomprehensible to a machine.  Machine translation programs lack the deeply inter-connected knowledge base that humans use when producing and interpreting language.

 

A more realistic machine translation (MT) program would require an information web with connections between concepts, rather than words, such that the concept of horse would be related to the concepts of leg, mane, tail, rider, etc., without any intervening linguistic connection.

Imagine a net of concepts represented as data objects.  These are connected to each other in an enormously complex web.  Then, separately, you have a net of linguistic objects, such as words and grammatical patterns, which is overlaid on the concept net and interconnected with it.  The objects representing the words “horse” and “mane” would not have a connection, but the objects representing the meanings underlying those words would have, perhaps, a “has-a” connection, itself represented by a connection or “association” object.
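
To make that a little more concrete, here’s a minimal sketch of what such a two-layer web might look like in code.  Everything in it (the Concept, Association, and Word classes and their fields) is invented for illustration; it’s a sketch of the idea, not an existing library or a real MT system.

```python
# A two-layer web: concept objects linked to each other by typed "association"
# objects, with word objects overlaid on top. All names here are invented for
# illustration purposes only.

class Concept:
    """A node in the concept net: pure meaning, no language attached."""
    def __init__(self, label):
        self.label = label          # label is just for our own debugging
        self.associations = []      # typed links out of this concept

class Association:
    """An explicit object for a typed link between two concepts, e.g. 'has-a'."""
    def __init__(self, kind, source, target):
        self.kind, self.source, self.target = kind, source, target
        source.associations.append(self)

class Word:
    """A node in the linguistic net, overlaid on the concept net."""
    def __init__(self, text, language, concept):
        self.text, self.language, self.concept = text, language, concept

# A tiny fragment of the web.
horse, mane = Concept("horse"), Concept("mane")
Association("has-a", horse, mane)    # a conceptual link, with no words involved

# Words in different languages all point at the same concept nodes.
words = [Word("horse", "en", horse), Word("caballo", "es", horse),
         Word("mane", "en", mane), Word("crin", "es", mane)]
```

Notice that the word objects for “horse” and “mane” never touch each other; the only “has-a” link lives between the two concept objects, which is exactly the separation described above.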

In order to translate between languages like a human would, you need your program to have an approximation of human understanding.  A famous study suggested that in the brain of a human who knows about Lindsay Lohan, there’s an actual “Lindsay” neuron, which lights up whenever you think about Lindsay Lohan.  It’s probably lighting up right now as you read this post.  Similarly, in our theoretical machine translation program’s knowledge “database”, you have a “Lindsay” (or “horse”) “neuron”, represented by the concept object I described above.  It’s separate from the linguistic object that contains the word group “Lindsay Lohan”, though the two are probably connected.

Whenever you dig the concept of horse or Lindsay Lohan out of your long-term memory, your brain sort of primes the concept by loading it and related concepts into short-term memory, so your “rehab” neuron probably fires pretty soon after your Lindsay neuron.  Similarly, our translation program doesn’t keep its whole data set in RAM constantly, but loads pieces of it from whatever our storage medium is, based on what’s connected to the currently loaded portion of the web.
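
As a rough sketch of that priming behaviour, imagine the program keeps only a small working set in memory and pulls in the neighbours of whatever concept it just touched.  The DISK dictionary and the concept ids below are stand-ins I made up; a real system would be reading from an actual database or disk store.

```python
# A toy "priming" cache: touching a concept pulls its neighbours into memory too,
# so the "rehab" concept is already loaded by the time the "Lindsay" concept needs it.

# Stand-in for the storage medium: concept id -> ids of directly connected concepts.
DISK = {
    "lindsay_lohan": ["rehab", "actress", "mean_girls"],
    "rehab":         ["lindsay_lohan", "addiction"],
    "actress":       ["lindsay_lohan"],
    "mean_girls":    ["lindsay_lohan"],
    "addiction":     ["rehab"],
}

working_memory = {}   # our stand-in for RAM / short-term memory

def prime(concept_id, depth=1):
    """Load a concept and, recursively, its neighbours up to `depth` hops away."""
    if concept_id not in working_memory:
        working_memory[concept_id] = DISK[concept_id]   # "load" from storage
    if depth > 0:
        for neighbour in working_memory[concept_id]:
            prime(neighbour, depth - 1)

prime("lindsay_lohan")
print(sorted(working_memory))   # "rehab", "actress", etc. are now primed in memory
```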

Current MT programs don’t translate like humans do.  No matter what tricks or algorithms they use, it’s all based on manipulating sequences of letters and basically doing math over a set of equivalences such as “caballo” = “horse”.  Whether they do statistical analysis on corpora of previously translated phrases and sentences, like Google Translate, to find the most likely translation, or a straightforward dictionary look-up one word at a time, they don’t understand what the text they are matching means in either language, and that’s why current approaches will never be able to compare to a reasonably competent human translator.
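
For contrast, here is roughly what the crudest end of that spectrum, the one-word-at-a-time dictionary look-up, amounts to.  The dictionary is a tiny made-up sample, and Google Translate is of course far more sophisticated than this, but even the statistical methods are, at bottom, still symbol matching with no layer of meaning in between.

```python
# A deliberately naive word-for-word "translator": pure symbol substitution with
# no representation of what any of the words mean. The dictionary is invented.

ES_TO_EN = {
    "el": "the", "caballo": "horse", "tiene": "has",
    "una": "a", "crin": "mane", "larga": "long",
}

def translate(sentence):
    return " ".join(ES_TO_EN.get(word, word) for word in sentence.lower().split())

print(translate("El caballo tiene una crin larga"))
# -> "the horse has a mane long": word order and sense are mangled, because nothing
#    in the program knows what a horse or a mane is
```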

It’s also why current “artificial intelligence” programs will never achieve true human-like general intelligence.  So, even your best current chatbot has to use tricks like pretending to be a Ukrainian teenager with bad English skills on AIM to pass the so-called Turing test.  A sidewalk artist might draw a picture-perfect crevasse that seems to plunge deep into the Earth below your feet.  But no matter how real it looks, your elevation isn’t going to change.  A bird can’t nest in a picture of a tree, no matter how realistically depicted.

Calling what Google Translate, or any machine “translation” program, does “translation” has to be viewed in context, or else it’s quite misleading.  Language functions properly only in the proper context, and that’s something statistical approaches to machine translation will never be able to imitate, no matter how many billions of dollars are spent on hardware or algorithm development.  Could you eventually get them to where they can probably usually mostly communicate the gist of a short newspaper article?  Sure.  Will you be able to engage live in witty repartee over Skype with an acquaintance who shares no language with you?  Probably not.  Not with the kind of system we have now.

Though crude, our theoretical program with the knowledge web described above might take us a step closer, but even if we could perfect and polish it, we’re still a long way from truly useful translation or AI software.  After all, we don’t even understand how we do these things ourselves.  How could we create an artificial version when the natural one still eludes our grasp?

 


AI and AlphaGo: Why It’s Not the Big Deal It’s Made Out to Be

I’d like to open this post by admitting I am not a Go master.  I’ve played a few times, and I watched Hikaru no Go when nothing else was on.  But that’s about it.  However, I don’t need to be an expert at the game to point out the flaws in some of the press coverage.  I suspect actual AI researchers already know what I mean.

The first thing to remember is that AlphaGo is a deep-learning program built on a neural network.  What that means is that rather than an artificial intelligence program, AlphaGo is an artificial learning program.  Public perception of AI is still focused on artificial intelligence, but the field has now expanded to cover many related, tangential, or component areas of study.  AlphaGo also has some form of reasoning ability.  But this ability is solely related to Go.  You cannot generalize its algorithms to other tasks.  In fact, DeepMind even admits there are better programs out there to play Chess.

Chess and Go are both “perfect information” (PI) games.  You can, if you so choose, know everything about a given game of Chess or Go by looking at the board.  You know all the rules and the position of all the pieces.  PI games are a very popular area of AI research, because programs can do a lot with them.  The information can be reduced to a very small set of states and rules, which is ideal territory for computers to excel in.  The trick, of course, is to teach the computer the best set of tactics for taking those rules and the initial state of the game, and trading states with another player until it reaches the win state.  And yet, even for two PI games, the best AI approach to building a player capable of competing with the best of humans is different for each game.
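
To see why PI games are such a comfortable fit for computers, here’s a sketch of a game reduced to exactly that: a state, a rule for legal moves, and a win test, which is all an exhaustive search needs.  Tic-tac-toe stands in here only because it’s small enough to search completely; Chess and Go have the same shape, just astronomically more states.

```python
# Tic-tac-toe reduced to a state, a legal-move rule, and a win test: enough for a
# computer to play perfectly by brute-force search. (Go has the same structure,
# just vastly more states, which is why it needs cleverer search and evaluation.)

def winner(board):                     # board: string of 9 cells, 'X', 'O' or ' '
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def best_score(board, player):
    """+1 if `player` (to move) can force a win, 0 for a draw, -1 for a forced loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    moves = legal_moves(board)
    if not moves:
        return 0
    other = "O" if player == "X" else "X"
    # My best outcome is the opponent's best reply, negated (plain negamax search).
    return max(-best_score(board[:i] + player + board[i + 1:], other) for i in moves)

print(best_score(" " * 9, "X"))        # 0: tic-tac-toe is a draw under perfect play
```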

I like to call this “specific intelligence”, although the more popular terms are “weak AI” or “narrow AI”: a kind of non-sentient intelligence focused on solving one task or a narrow range of tasks.  But even that is a bit of a misnomer.  After all, the machines aren’t truly smart, just impressively programmed dumb machines.

However, a learning program like AlphaGo comes a bit closer to true intelligence (though not sentience) by being able to take the initially programmed rules and knowledge and extrapolate from them on its own, doing things it wasn’t explicitly hard-coded by the programmers to do.  It’s incredibly impressive.  But it’s not “AI” in the way most layfolk think of it.  It’s not general intelligence, even a crude version.  It’s a very sophisticated piece of specific intelligence.

 

 

But there’s a second flaw in the coverage.  Besides the great deal of mystique that’s built up around Go, which isn’t really an AI issue, and some of which is misplaced (for example, the claim that another lifeform would “almost certainly play Go” whereas Chess is too human-specific), there’s the issue that even as a powerful example of narrow AI, AlphaGo does not, as some professional players have claimed, “play Go just like a human but better”.  There has been much talk of its unorthodox tactics, and of its algorithm’s focus on win rate over all else.  Some have even said it made moves “only God could have made”, a common expression for a perfect move.

 

But the real truth is this: much like a genetic programming system (a style of coding in which a computer is given basic building blocks of code and tasked with mixing them up until it finds a closer-to-optimal solution) has no idea what problem it is solving, AlphaGo has no idea it is playing Go.  As far as AlphaGo knows, it’s just trading ones and zeroes around until it finds the desired sequence.  The ways in which a human player attempts to reach the winning board position are inherently different from the way a computer does, because they aren’t really pursuing the same goal.
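
The same blindness is easy to see in a toy genetic algorithm, a close cousin of the genetic programming idea above (a real GP system evolves fragments of code rather than a bitstring, but the principle is the same).  The target pattern below is arbitrary and made up; the program converges on it without any notion of what, if anything, it represents.

```python
import random

# A toy genetic algorithm: the program shuffles ones and zeroes toward a higher
# score with no idea what, if anything, the bits stand for.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # arbitrary goal pattern

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.1):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)   # best candidates first
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                    # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(generation, population[0])   # the "solution" is still just ones and zeroes to the program
```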

 

We’re not particularly closer to strong or general AI than we were before.  Go isn’t truly so different from any other PI game.  AlphaGo has not learned intuition.  It has merely played millions of games of Go, subtly adjusting the value it places on a given arrangement of stones on the board as it goes, until its win rate climbs high enough that it wins the game.  Although the process is superficially similar to the way a human learns the game, the lack of the framing devices humans rely on, such as vision, has taught it to value entirely different things, and unlike a human, a computer has a perfect memory to go with the perfect information, and it is incapable of making an error.
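
As a caricature of that win-rate bookkeeping (and nothing more; DeepMind’s actual combination of neural networks and tree search is far more sophisticated), here is the bare idea of valuing a position by how often games passing through it were won.  The position ids and outcomes below are random stand-ins.

```python
import random
from collections import defaultdict

# Caricature of win-rate-driven valuation: every position is an opaque key in a
# table, valued by how often games passing through it were eventually won. There
# is no notion of shape, territory, or intuition here; just counts.

wins = defaultdict(int)
visits = defaultdict(int)

def value(position):
    """Estimated win rate for a position; 0.5 if it has never been seen."""
    return wins[position] / visits[position] if visits[position] else 0.5

def record_game(positions_seen, won):
    for position in positions_seen:
        visits[position] += 1
        if won:
            wins[position] += 1

# Fake "self-play": positions are meaningless ids, outcomes are coin flips.
for _ in range(10_000):
    game = [random.randrange(100) for _ in range(5)]
    record_game(game, won=random.random() < 0.5)

print(value(game[0]))   # settles near the observed win rate of that "position"
```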

After that, we can consider the psychological warfare aspect of multi-player games.  AlphaGo may be able to beat anyone Lee Se-dol could, but it cannot judge its opponent’s experience and alter its strategy to beat that player faster or more elegantly.  Instead, it will always play the same way every time, and react no differently to a master making three opening moves than to a novice making the same.  But where a human might see those moves and be able to make a variety of plays depending on their intuition of the player’s skill or likely next move, AlphaGo will continue to inexorably play exactly the move that has the highest chance of victory against any and all players, rather than the one with the highest chance of victory against that specific individual.

 

Posted by on March 15, 2016 in atsiko, Science Fact

 
