Your Chatbot Overlord Will See You Now

23 Feb

Science fiction authors consistently misunderstand the concept of AI.  So do AI researchers.  They misunderstand what it is, how it works, and most importantly how it will arise.  Terminator?  Nah.  The infinitely increasing complexity of the Internet?  Hell no.  A really advanced chatbot?  Not in a trillion years.

You see, you can’t get real AI with a program that sits around waiting for humans to tell it what to do.  AI cannot arise spontaneously from the internet, from a really complex military computer system, or from even the most sophisticated natural language processing program.

The first mistake is the one Alan Turing made with his Turing test.  The same mistake the founder of and competitors for the Loebner Prize have made.  The mistake being: language is not thought.  Despite the words you hear in your head as you speak, despite the slowly-growing verisimilitude of chatbot programs, language is and only ever has been the expression of thought, not thought itself.  After all, you can visualize a scene in your head without ever using a single word.  You can remember a sound or a smell or the taste of day-old Taco Bell.  All without using a single word.  A chatbot can never become an AI because it cannot actually think; it can only loosely mimic the linguistic expression of thought through tricks and rote memorization of templates that, if it’s really advanced, may involve plugging a couple of variables taken from the user’s input into a canned response.  Even chatbots based on neural networks and enormous amounts of training data, like Microsoft’s Tay or Siri/Alexa/Cortana, are still just tricks of programming trying to eke out an extra tenth of a percentage point of illusory humanness.  Even IBM’s Watson is just faking it.
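To make the trick concrete, here’s a toy Python sketch of ELIZA-style template matching, with invented patterns and canned replies.  Nothing in it represents meaning; it just captures a string and pastes it back:

```python
import re

# ELIZA-style template matching: find a pattern, paste the captured words
# into a canned reply.  Patterns and replies here are invented examples.
TEMPLATES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+) is (\w+)", re.I), "How long has your {0} been {1}?"),
    (re.compile(r".*", re.S), "I see.  Go on."),  # fallback: pure filler
]

def reply(user_input: str) -> str:
    for pattern, template in TEMPLATES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())

print(reply("I feel tired today"))  # -> Why do you feel tired today?
print(reply("My dog is sick"))      # -> How long has your dog been sick?
print(reply("What is love?"))       # -> I see.  Go on.
```

The program never knows what “tired” or “sick” means; scale the template list up by a few thousand entries and you have the skeleton of most classic chatbots.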

Let’s consider for a bit what human intelligence is, to give you an idea of what the machines of today are lacking and why most theories on AI are wrong.  We have language, the expression of intelligence that so many AI programs are so intent on trying to mimic.  We also have emotions and internal drive, incredibly complex concepts that most current AI is not even close to understanding, much less implementing.  We have long-term and short-term memory, something that’s relatively easy for computers to approximate, although in a very different way than humans, and an area where there has still been no significant progress toward the human version, because everyone is so obsessed with neural networks and their ability to complete individual tasks something like 80% as well as a human.  A few, like AlphaZero, can actually crush humans into the ground on multiple related tasks; in AlphaZero’s case, Go and chess-like board games.

These are all impressive feats of programming, though the opacity of neural-network black boxes kinda dulls the excitement.  It’s hard to improve reliably on something you don’t really understand.  But they still lack one of the key ingredients for making a true AI: a way to simulate human thought.

Chatbots are one of two AI fields that focus far too much on expression over the underlying mental processes.  The second is natural language processing (NLP).  This includes such sub-fields as machine translation, sentiment analysis, question answering, automatic document summarization, and various smaller tasks like speech recognition and text-to-speech.  But NLP is little different from chatbots, because both focus on tricks that manipulate the surface expression while knowing relatively little about the conceptual framework underlying it.  That’s why Google Translate or whatever program you use will never be able to match a good human translator.  Real language competence requires understanding what the symbols mean, not just shuffling them around with fancy pattern-recognition software and simplistic deep neural networks.
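To see what “shuffling symbols around” looks like in its crudest form, here’s a deliberately naive Python sketch of word-for-word substitution; the six-word English-to-Spanish dictionary is invented for illustration:

```python
# "Translation" as pure symbol shuffling: substitute each word through a
# bilingual dictionary with no model of meaning.  Dictionary is invented.
DICTIONARY = {"the": "el", "cat": "gato", "drinks": "bebe", "milk": "leche",
              "time": "tiempo", "flies": "moscas"}  # "flies" stored as the noun

def translate(sentence: str) -> str:
    return " ".join(DICTIONARY.get(word, word) for word in sentence.lower().split())

print(translate("the cat drinks milk"))  # el gato bebe leche -- passable
print(translate("time flies"))           # tiempo moscas -- gibberish: the
                                         # program can't tell verbs from insects
```

Real MT systems are enormously more sophisticated than this, but the difference is one of degree, bigger tables and better statistics, not one of understanding.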

Which brings us to the second major lack in current AI research: emotion, sentiment, and preference.  A great deal of work has been done on mining text for sentiment analysis, but the computer is just taking human-tagged data and doing some calculations on it.  It still has no idea what emotions are, so it can only do keyword searches and similar tricks and hope the averaged values give it a usable answer.  It can’t recognize indirect sentiment, irony, sarcasm, or other figurative language.  That’s why you can get Google Translate to ask where the toilet is, but it’s not gonna do so hot on a novel, much less poetry or humour.  Real translation is far more complex than matching words and applying some grammar rules, and Machine Translation (MT) can barely get that right 50% of the time.
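Here’s roughly what that keyword-averaging looks like, a minimal sketch with an invented mini-lexicon.  Watch the sarcastic second example sail right past it:

```python
# Lexicon-based sentiment scoring: average the scores of known keywords.
# The word scores below are invented for illustration.
LEXICON = {"great": 1.0, "love": 0.9, "good": 0.7,
           "bad": -0.7, "hate": -0.9, "terrible": -1.0}

def sentiment(text: str) -> float:
    """Average the scores of known keywords; every other word is invisible."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("I love this great phone"))                 # 0.95 -> positive
print(sentiment("Oh great, it broke again.  Just great."))  # 1.00 -> "positive"
# The sarcasm scores as glowing praise, because only the keywords are visible.
```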

So we’ve talked about thought vs. language, and the lack of emotional intelligence in current AI.  The third issue is something far more fundamental: drive, motivation, autonomy.  The current versions of AI are still just low-level software following a set of pre-programmed instructions.  They can learn new things if you funnel data through the training system.  They can do things if you tell them to.  They can even automatically repeat certain tasks with the right programming.  But they rely on human input to do their work.  They can’t function on their own, even if you leave the computer or server running.  They can’t make new decisions or teach themselves new things without external intervention.
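Strip away the branding and the core loop of today’s assistants looks something like this sketch: block, wait for a human, respond, repeat.  Between prompts, nothing happens at all:

```python
# The reactive skeleton of a tool that only acts when told to.  There is
# no goal, no initiative, no activity of any kind between inputs.
while True:
    command = input("> ")           # inert until a human types something
    if command == "quit":
        break
    print(f"Executing: {command}")  # stand-in for whatever the tool does
```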

This is partially because they have no need.  As long as their machine “body” is powered, they keep chugging along.  And they have no ability to affect whether or not it is powered.  They don’t even know they need power, for the most part.  Sure, they can measure battery charge and engage sleep mode through the computer’s operating system.  But they have no idea why that’s important, and if I turn the power station off or just unplug the computer, a thousand years of battery life won’t help them plug back in.  Whereas human intelligence is based on the physical needs of the body motivating us to interact with the environment, a computer running the rudimentary “AI” we have now has no such motivation.  It can sit in its resting state for eternity.
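Even when a program does “monitor” its own power, it’s a reflex, not a survival instinct.  Here’s a minimal sketch using the real psutil library; the percentage is just a number to the program:

```python
import psutil  # cross-platform system-stats library

# Read the battery and react to a threshold.  The program "knows" the
# number, but nothing about why running out of power would matter.
battery = psutil.sensors_battery()
if battery is None:
    print("No battery found.")
elif battery.percent < 10 and not battery.power_plugged:
    print(f"Battery at {battery.percent}%; engaging sleep mode.")
else:
    print(f"Battery at {battery.percent}%; carrying on indefinitely.")
```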

Even with an external motivation, such as being coded to collect knowledge, or to use robot arms to maintain the pre-designated structure of, say, a block pyramid or one of those water-and-sand erosion tables you might see at the science center, an AI is not autonomous.  It’s still following a task given to it by a human.  Whereas no one told human intelligence how to make art or why it’s valuable.  Most animals don’t get it, either.  It’s something we developed on our own, outside the basic needs of survival.  Intelligence helps us survive, but because of it we need things to occupy our time in order to maintain mental health and a desire to live and pass on our genes.  There’s nothing to say that an inability to be bored is a deal-breaker for a machine intelligence, of course.  But the ability to conceive and implement new purposes in life is what makes human intelligence different from that of animals, whose intelligence may have less raw power but still maintains the key element of motivation that current AI lacks.  A chatbot or a neural network as we know them today can never achieve that motivation, no matter how many computers you give it to run on or how many TV scripts you give it to analyze.  This fundamental misapprehension of what intelligence is and does means the AI community will never achieve a truly intelligent machine or program.

Science fiction writers dodge this lack of understanding by ignoring the technical workings of AI and just making AIs act like strange humans.  They do a similar thing with alien natural/biological intelligences.  It makes them more interesting and allows them to be agents in our fiction.  But that agency is wallpaper over a completely nonexistent technological understanding of ourselves.  It mimics the expression of our own intelligence but gives limited insight into the underlying processes of either form.  No “hard science fiction” approach does anything more than build a “scientific magic system”.  It’s hard sci-fi because it has fixed rules with complex interactions from which the author builds a plot or a character, but it’s “soft sci-fi” in that these plots and characters have little to do with how AI would function in reality.  It’s the AI equivalent of hyperdrive: a technology we have zero understanding of and which probably can’t even exist.

Elon Musk can whinge about the evils of unethical AI destroying the world, but that’s just another science fiction trope with zero evidential basis in reality.  We have no idea how an AI might behave towards humans, because we still have zero understanding of what natural and artificial intelligences are and how they work.  Much less how the differences between the two would affect “inter-species” co-existence.  So your chatbot won’t be becoming the next HAL or Skynet any time soon, and your robot overlords are still a long way off, if they can exist at all.

 
