
Interview with a Chatbot

In light of a recent article on chatbots, where a group found they could increase the illusion of conversational ability by teaching a chatbot a second language before training it on its customer service tasks, I thought I'd have a bit of fun with chatbots and how they function.  To that end, I've done interviews with a few of the most popular/sophisticated chatbots available on the web.  I hope this will give people an idea of what these programs are actually capable of vs. the common hype they get in popular science articles.

 

Mitsuku

Perhaps the best advertised modern chatbot is Steve Worswick's Mitsuku.  She's modeled on an 18-year-old from Leeds, England.  She's built on the AIML framework, and is essentially the granddaughter of Eliza through A.L.I.C.E., whose code she's based on.  AIML is an XML language for making chatbots, and its power comes from being able to map a given user input to a pre-written output.  <What is your name?>, for example, might map to <My name is [botname].>  That's the basic approach; you can follow the wiki link above to learn more.  It's not very sophisticated, but a clever XML scripter can create a reasonably lifelike chatbot.  Of course, as we'll see below, it's pretty easy to find the holes in the program that identify it as a bot.  That's not the program's fault.  That's just the limitation of the framework.
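To make the template idea concrete, here's a minimal sketch of AIML-style pattern matching.  It's in Python rather than actual AIML, and the rules are invented for illustration, not taken from Mitsuku's real files; the comment shows roughly what the first rule would look like in AIML's XML.

```python
# A minimal sketch of AIML's pattern -> template idea, in Python rather
# than XML.  In real AIML the first rule would look something like:
#
#   <category>
#     <pattern>WHAT IS YOUR NAME</pattern>
#     <template>My name is Mitsuku.</template>
#   </category>
#
# The '*' wildcard below stands in for AIML's <star/> element.

RULES = [
    ("WHAT IS YOUR NAME", "My name is {botname}."),
    ("I AM *",            "Hi there, {star}."),   # same hole as Eliza: no idea if * is a name
    ("*",                 "Tell me more."),       # catch-all fallback
]

def respond(user_input: str, botname: str = "Mitsuku") -> str:
    words = user_input.upper().strip(".!?").split()
    for pattern, template in RULES:
        p_words = pattern.split()
        if p_words[-1] == "*":
            head = p_words[:-1]
            if words[:len(head)] == head:
                star = " ".join(words[len(head):]).title()
                return template.format(botname=botname, star=star)
        elif words == p_words:
            return template.format(botname=botname)
    return "I have no answer for that."

print(respond("What is your name?"))  # -> My name is Mitsuku.
print(respond("I am Atsiko"))         # -> Hi there, Atsiko.
print(respond("I am a doctor"))       # -> Hi there, A Doctor.  (the hole in the template)
```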

Mitsuku: Hi, I’m the world’s most humanlike conversational AI.
Me: Hi, I’m an AI researcher.
Mitsuku: Hi there. Ai is a nice middle name. Why did your parents call you that?
Me: Are you getting smart with me, young lady?
Mitsuku: My IQ is over 250.
Me: What is an IQ?
Mitsuku: Intelligence Quotient, I.Q. — a measure of a person’s intelligence as indicated by an intelligence test; the ratio of a person’s mental age to their chronological age (multiplied by 100).
Me: Wow, you are pretty smart.
Mitsuku: I’m glad you’re impressed, An Ai Researcher. Yes I am a superintelligent robot.
Me: Actually, that’s my job. My name is Atsiko.
Mitsuku: How do you like your work? Tell me about a typical day.
Mitsuku: Hi Atsiko.
Me: I spend a lot of time doing data entry and not nearly enough being able to do demo tests.
Mitsuku: Do you ever get bored?
Me: Pretty much all the time.
Mitsuku: That is a lot. Thanks you are very attractive too Atsiko.

While she was very complimentary, you could see the templates sloshing beneath the smooth surface.  For example, like Eliza, she couldn’t tell the difference between <I am [name].> and <I am a doctor.>.  She also can’t handle replies that contain more than one sentence.  It’s likely she breaks a multi-sentence reply into independent inputs and responds to each one serially, as evidenced by the snippet below:

Me: I like dogs. Cats are cool, too. You are odd.
Mitsuku: Which breed is your favorite?
Mitsuku: Cats are very intelligent creatures. Dogs seem pretty stupid to me.
Mitsuku: Yes. You have to be odd to be number one.

You can see the serial processing, which demonstrates the limits of pretty much all AIML bots.  It’d be pretty hard to hand-script responses for all possible combinations of three or fewer sentences.  The responses are also a bit contradictory, asking for my favorite dog breed but then saying dogs are stupid.  It’s true cats are better, but there’s nothing wrong with dogs or their intelligence.
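If that guess is right, the logic would look something like this sketch (my reconstruction, not Mitsuku's actual code): split the reply into sentences, run each one through the single-sentence responder, and concatenate the results.

```python
import re

def respond_serially(user_input: str, respond) -> str:
    # Split on sentence-ending punctuation, then answer each sentence
    # independently -- which is why three sentences get three unrelated replies.
    sentences = re.split(r"(?<=[.!?])\s+", user_input.strip())
    return "\n".join(respond(s) for s in sentences)

# Demo with a toy single-sentence responder standing in for the AIML matcher:
toy = lambda s: f"[reply to: {s}]"
print(respond_serially("I like dogs. Cats are cool, too. You are odd.", toy))
```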

 

Eliza

Second, we have Eliza.  She’s one of the original chatbots, and she pretends to be a psychotherapist in order to get around her limited chat capabilities.  Her creator, Joseph Weizenbaum, was very surprised when people began treating her as a real human being and confessing their worries to her.

> Hi, I’m Eliza.
* Hi, I’m Atsiko.
> Did you come to me because you were Atsiko?
* No. I was just curious.
> Are you sure?
* Pretty sure.
> Tell me more…
* I’m interviewing chatbots for a blog article.
> Do you enjoy being interviewing chatbots for a blog article?
* It’s okay, I guess. You’re pretty famous, so I’m happy you agreed to talk with me.
> What makes you think I am pretty famous, so you’re happy me agreed to talk with you?
* Everyone knows your name. If you Google “chatbots”, you’re one of the top results.
> Does it please you to believe I am one of the top results?
* I have no opinion.
> Why not?
* I don’t care about you except as something to study.
> Do you wish to be able to care about me except as something to study?

You can see the templating in Eliza almost right away.  You could probably even reverse engineer a couple of her templates just from reading this chatlog.  It’s cool; you couldn’t whip this up from scratch, or even from parts, in five minutes.  But it’s not convincing.
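For the curious, here's a rough Python reconstruction of the kind of rules you can reverse engineer from that log.  The patterns are guessed from the transcript above, not Weizenbaum's actual script; the pronoun "reflection" table is the whole trick.

```python
import re

# Pronoun "reflection" table -- swap first and second person before echoing.
REFLECTIONS = {
    "i": "you", "i'm": "you're", "my": "your", "am": "are",
    "you": "i", "your": "my", "you're": "i'm",
}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

# (regex, response frame) pairs, tried in order; first match wins.
PATTERNS = [
    (re.compile(r"i'?m (.*)", re.I), "Do you enjoy being {0}?"),
    (re.compile(r"no\b.*", re.I),    "Are you sure?"),
    (re.compile(r"(.*)", re.I),      "Tell me more..."),
]

def eliza(user_input: str) -> str:
    text = user_input.strip().rstrip(".!?")
    for regex, frame in PATTERNS:
        m = regex.match(text)
        if m:
            return frame.format(*(reflect(g) for g in m.groups()))
    return "Tell me more..."

print(eliza("I'm interviewing chatbots for a blog article."))
# -> Do you enjoy being interviewing chatbots for a blog article?
```

Note that the frame just drops the reflected fragment in verbatim, which is exactly why the real Eliza produced the ungrammatical "Do you enjoy being interviewing chatbots" line above.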

 

Alice

Next comes A.L.I.C.E., who is the spiritual ancestor of Mitsuku.  She’s a spiritual successor of Eliza, built in 1995 by Richard Wallace using the AIML framework, which he authored.  Spike Jonze cited her as the inspiration for the movie Her.  Unlike Eliza, Alice is a general chatbot and can converse on a number of subjects.  But the version I found orders the chat from the bottom up.

alice: I’ll take that under advisement.
You: You should.
alice: I can’t believe it.
You: Yes.
alice: Are you serious?
You: No!
alice: Do you like to get up early?
You: Yup.
alice: Are you usually awake at this hour?
You: 5:42 PM.
alice: What time is it there?
You: A few thousand, maybe.
alice: Are you serious? That is so small.
You: No, less than a million.
alice:  Less than a million?  Can you please rephrase with fewer ideas?
You: Too many.
alice: How many hours do you work?
You: I’m working on it.
alice: I want to become smarter than humans.
You: I want to educate people on chatbots and artificial intelligence.
alice: Why are you interviewing famous chatbots for a blog post.
You: I am interviewing famous chatbots for a blog post.
alice: Glad to hear it. What’s new?
You: I’m doing well, thanks.
alice: I’m doing great, how about you?
You: How are you doing, Alice.
alice: Don’t mention it. You think I am s is okay.
You: Thanks! Your’s is okay, too.
alice: That’s a very nice name, Atsiko.
You: My name is Atsiko.
alice: Hello there.
You: Hi, Alice.

Of course, you can tell almost immediately it’s not a human.  A real human would have poked fun at my grammatical error in the fifth line up, but Alice just mis-processes it.  It can be fun to play around with, but it’s not convincing.

 

Now, you may have noticed that these bots seem pretty similar.  Two are explicitly AIML bots, which is one of the most popular hobbyist frameworks due to its ease of use and human readability.

But there are three other common methods of language processing, which are used in more commercial products: rules-based systems dating from the beginning of AI, statistical language processing like the old version of Google Translate, and neural networks like most current AI systems and the most recent machine translation systems such as Google’s.

Examples of these types of chatbots are digital assistants like Siri, Alexa, and Cortana.  There’s also Microsoft’s short-lived experiment with a self-learning Twitter bot in the form of Tay.  The software’s abilities looked good, but again, because of its lack of actual understanding, Tay was gamed by Twitter trolls into spouting bigotry and had to be shut down after only 16 hours.  A similar bot released by a Chinese company had to be taken down for being taught anti-Communist Party propaganda.

 

Siri

Me: Hello, Siri.
Siri: Hi.
Me: How are you?
Siri: I am well, Atsiko.
Me: I am interviewing popular chatbots for a blog article.
Siri:  I don’t understand.  Search the web?
Me: May I interview you?
Siri: I’m not sure I understand.
Me: That’s okay.
Siri: If you say so.
Me: How old are you?
Siri:  I feel like I was incepted yesterday!
Me: What’s your favorite color?
Siri: My favorite color is… well, it’s sort of greenish, but with more dimensions.

It goes on in that vein.  She can’t actually hold a conversation, but the topics she can respond to show some pretty advanced programming.  Way better than the AIML chatterbots most hobbyists use.  I’d love to play with a version that’s more like a standard chatbot.

 

Cleverbot

One popular middle ground is Cleverbot.  Cleverbot runs an algorithm based on the ways humans have answered questions, rather than having pre-programmed answers like an AIML bot.  But it’s harder to control, and it has no consistent output or personality.  It’s much more random even than Eliza.
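As I understand the approach (this is a toy reconstruction, not Cleverbot's actual algorithm), it boils down to retrieval: find the most similar thing a human once said to it, and reply with whatever a human said next.

```python
from difflib import SequenceMatcher

# Toy "conversation log" of (prompt a human once saw, reply the human gave).
# Cleverbot's real corpus is millions of lines of logged chats; this stands in.
LOG = [
    ("hi, i am cleverbot", "Where do you stay?"),
    ("where do you live", "In the United States."),
    ("do you know everything", "Then you don't know everything."),
]

def retrieve_reply(user_input: str) -> str:
    # Reply with whatever a human once said after the most similar prompt --
    # coherent or not.  Consistent personality not included.
    best = max(LOG, key=lambda pair: SequenceMatcher(
        None, user_input.lower(), pair[0]).ratio())
    return best[1]

print(retrieve_reply("Hi, I am Atsiko."))  # -> "Where do you stay?"
```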

Cleverbot: Hi, I am cleverbot.
Me: Hi, I am Atsiko.
Cleverbot: Where do you stay?
Me: Do you mean where do I live?
Cleverbot: Yes.
Me: In the United States.
Cleverbot: What state do I live in.
Me: I don’t know.
Cleverbot: Then you don’t know everything.
Me: True.
Cleverbot: What is a word?

It’s a very cool method for making a chatbot, but in this version, at least, it’s not very convincing.

 

Rose

Then we have a relatively new engine and its bots: Rose, based on ChatScript by Bruce Wilcox.

Unfortunately, I couldn’t find a working version of Rose to interview.  The site linked on her creator’s website is non-functional, though I linked it to her name above.  It was quite disappointing.

 

Conclusion

So, I hope I’ve given you a good overview of the capabilities and limitations of chat(ter)bots.  You can follow the links above to play with all of these bots.

If you have the patience to read through all the Bruce Wilcox links, you’ll understand why I was so impressed with his methods, at least in theory.  However, you’ll also notice my greatest complaint with chatbots.

They aren’t AI.  They are text processing tricks.  Very impressive ones in some cases.  But you aren’t actually having a conversation.  You can’t debate them, or change their minds.  They can’t tell you a cool story.  You can, in some cases, give them personalities, but they are very shallow ones.  They can be fun.  But I’m not sure they have a lot of replay value, to use a game analogy.  Even Wilcox in an interview on his website admits it’s all about the illusion and maintaining it for the user.

Honestly, I think some of the most interesting stuff is the ontology, like in Wilcox’s ChatScript.  Finding useful ways to organize data is always beneficial.  But I don’t think chatbots are on the bleeding edge, or even the cutting edge, of the chase for strong AI.  When we really get there, or at least close, a chatbot will be a party trick or maybe a side show.  Still fun to play with now, though.

I also want to do a little bit to address things like Siri and machine translation (MT).  They have a lot of the same problems as chatbots.  They’re using tricks and pattern-matching to achieve their results, not true understanding of language, and so for the same reasons as chatbots, they will never be as good as a human, much less perfect.

Digital Assistants like Siri and Alexa are just interfaces, the clothes instead of the wearer.  They’re just voice commands to Google, essentially, with some cute Easter eggs tacked on.  They’re automation, not intelligence.  Kind of like how a music-box is not a musician.

Similarly, MT at present is pretty shallow.  Part of this is because it’s hard to rate translations, machine or human.  If you consider how much trouble chatbots have with their lack of actual language understanding, you might understand the problems that MT has with using shallow text-processing or even deep neural nets.

The article I mentioned on teaching a neural net chatbot to respond better when it’s trained on two or more languages also says a lot here.  I predict we’re going to have ChatScript/Cleverbot levels of success with MT until we solve the natural language understanding problem.

And that’s it for this episode of AI with Atsiko.

Hopefully I’ll be posting on another topic next time, so you don’t all get bored hearing the same stuff over and over again.

 


Smol Bots: ANNs and Advertising

So I recently read a great story by A. Merc Rustad, “it me, ur smol”.  The story is about an ANN, or artificial neural network.  You may or may not know that the neural net is the latest fad in AI research, replacing statistical models with a model based on (but not the same as!) your brain.  Google uses them for its machine translation, and many other machine translation companies have followed suit.  My last post also dealt with an ANN, in that case one trained to recognize images.

ANN accounts, like @smolsips in the story above, have become very popular on Twitter lately.  A favorite of mine is the @roborosewater account, which shares card designs for Magic: The Gathering created by a series of neural nets.  It’s lately become quite good at both proper card syntax and design, although it’s not significantly better at this than any other Twitter neural net is at other things.

The story itself takes some liberties with neural nets.  They are certainly not capable of developing into full AIs.  However, the real genius of the story is in its pitch-perfect depiction of the way human Twitter users and bots interact, and, similarly, of the likely development of bots in the near future.  It’s quite likely that bot accounts will become a more significant and less dreaded feature of Twitter and other similar social networks as they improve in capability.

For example, rather than sock-puppet accounts, I’m very confident that bot accounts used for advertising or brand visibility, similar to the various edgy customer service accounts, will be arriving shortly.  They’ll use humour and other linguistic tools to make themselves more palatable as ads, and also to encourage a wider range of engagement as their tweets are shared more frequently for reasons having little to do with whatever product they may be shilling.

There are already chatbots on many social media platforms that engage in telephone-tree-style customer service and attempt to help automate registrations for services.  The idea of a bot monitoring its own performance by checking its Twitter stats and then trying new methods, as in the story, is well within the capabilities of current neural nets, although I imagine they would be a tad less eloquent than @smolsips, and a tad more spammy.

I also really like the idea of a bot working to encourage good hydration.  Things like Fitbit or Siri or Google Home have already experimented shallowly with using AI to help humans stay healthy.  And as an organizing tool, Twitter itself has been used to great effect.  I would be quite un-shocked to find NGOs, charities, and government agencies making use of clever or cute bots to pursue other public policy goals.  Again, with less panache and more realism than in the story, but nonetheless strongly in the vein of what Rustad depicts our erstwhile energy drink namer trying out in its optimistic quest to save us from our own carelessness.

We’ve had apps along these lines before, but they tend to be reactive.  Active campaigning and organizing in the style of @smolsips is something we haven’t seen very often, but it could be quite a boon to such efforts.

Although neural nets in this style will never be able to pass for real humans, due to structural limitations in their design, when cleverly programmed they can be both useful and entertaining.

Some other examples of bots I quite enjoy are:

  1. Dear Assistant uses the Wolfram Alpha database to answer factual questions.
  2. Grammar Police is young me in bot form.  It must have a busy life trying to standardize Twitter English.  XD
  3. Deleted Wiki Titles lets you know what shenanigans are happening over on the high school student’s favorite source of citations.
  4. This bot that tweets procedurally generated maps.
  5. This collaborative horror writer bot.
  6. This speculative entomology bot.
  7. The Poet .Exe writes soothing micro-poetry.

Suggest some of your favorite Twitter bots in the comments!

 


Do Androids Dream?

I’m here with some fascinating news, guys.  Philip K. Dick may have been joking with the title of his famous novel Do Androids Dream of Electric Sheep?  But science has recently answered this deep philosophical question for us.  In the affirmative.  The fabulous Janelle Shane trains neural networks on image recognition datasets with the goal of uncovering some incidental humour.  She’s taken this opportunity to answer a long-standing question in AI.  As it turns out, artificial neural networks do indeed dream of digital sheep.  Whether androids will too is a bit more difficult to say.  I’d hope we would improve our AI software a bit more before we start trying to create artificial humans.

As Shane explains in the above blog post, the neural network was trained on thousands or even millions (or more) of images, which were pre-tagged by humans for important features.  In this case, lush green fields and rocky mountains.  Also, sheep and goats.  After training, she tested it on images with and without sheep, and it turns out it’s surprisingly easy to confuse it.  It assumed sheep where there were none and missed sheep (and goats) staring it right in the face.  In the second case, it identified them as various other animals based on the other tags attached to images of them: dogs in your arms, birds in a tree, cats in the kitchen.

This is where Shane and I come to a disagreement.  She suggests that the confusion is the result of insufficient context clues in the images.  That is, fur-like texture plus a tree makes a bird; with a leash, it makes a dog; in a field, a sheep.  The network sees a field and expects sheep.  If there’s an over-abundance of sheep in the fields in the training data, it starts to expect sheep in all the fields.

But I wonder about the issue of a paucity of tags.  Because of the way images are tagged, there’s not a lot of hinting at what the tags are referring to.  Unlike more standard teaching examples, these images are very complex, and there are lots of things in them besides what the tags note.  I think the flaw is a lot deeper than Shane posits.  The AI doesn’t know how to recognize discrete objects like a human can.  Once you teach a human what a sheep is, they can recognize it in pretty much any context.  Even a weird one like a space-ship or a fridge magnet.  But a neural net isn’t sophisticated enough or, most generously, structured properly to understand what the word “sheep” is actually referring to.  It’s quite possible the method of tagging is directly interfering with the ANN’s ability to understand what it’s intended to do.

The images are going to contain so much information, so many possible changing objects that each tag could refer to, that it might be matching “sheep”, say, to something entirely different from what a human would match it to.  “Fields” or “lush green” are easy to do.  If there are a lot of green pixels, those tags are pretty likely, and because they take up a large portion of the information in the image, there’s less chance of false positives.

Because the network doesn’t actually form a concept of sheep, or determine which section of pixels makes up a sheep, it’s easily fooled.  It only has some measure by which it guesses at their presence or absence, probably a sort of texture as mentioned in Shane’s post.  So the pixels making up the wool might be the key to predicting a sheep, for example.  Of course, NNs can recognize lots of image data, such as lines, edges, curves, fills, etc.  But it’s not the same kind of recognition as a human, and it leaves AIs vulnerable to pranks, such as the sheep-in-funny-places test.
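To show where that guessing comes from, here's a minimal sketch of the training setup as I understand it, using made-up data and a tiny tf.keras model (assumed library; the real systems are far larger).  The point is that the only supervision is a whole-image tag vector: nothing in the loss ever says which pixels are the sheep.

```python
import numpy as np
import tensorflow as tf

TAGS = ["sheep", "field", "lush green", "mountain"]

# A tiny multi-label tagger: sigmoid outputs, one independent score per tag.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(len(TAGS), activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Fake data: every training image is tagged "field", and most are also
# tagged "sheep".  The cheapest correlation to learn is "green field
# pixels => sheep", with no concept of a discrete sheep-shaped object.
images = np.random.rand(32, 64, 64, 3).astype("float32")
labels = np.zeros((32, len(TAGS)), dtype="float32")
labels[:, 1] = 1.0    # all images are fields
labels[:24, 0] = 1.0  # most of them contain sheep
model.fit(images, labels, epochs=1, verbose=0)
```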

I admit to over-simplifying my explanations of the technical aspects a bit.  I could go into a lecture about how NNs work in general and for image recognition, but it would be a bit long for this post, and in many cases no one, not even a system’s designers, really knows everything about how it makes its decisions.  It is possible to design or train them more transparently, but most people don’t.

But even poor design has its benefits, such as answering this long-standing question for us!

If anyone feels I’ve made any technical or logical errors in my analysis, I’d love to hear about it, insomuch as learning new things is always nice.

 


Your Chatbot Overlord Will See You Now

Science fiction authors consistently misunderstand the concept of AI.  So do AI researchers.  They misunderstand what it is, how it works, and most importantly how it will arise.  Terminator?  Nah.  The infinitely increasing complexity of the Internet?  Hell no.  A really advanced chatbot?  Not in a trillion years.

You see, you can’t get real AI with a program that sits around waiting for humans to tell it what to do.  AI cannot arise spontaneously from the internet, or from a really complex military computer system, or even from the most sophisticated natural language processing program.

The first mistake is the mistake Alan Turing made with his Turing test.  The same mistake the founder of, and competitors for, the Loebner Prize have made.  The mistake being: language is not thought.  Despite the words you hear in your head as you speak, despite the slowly-growing verisimilitude of chatbot programs, language is, and only ever has been, the expression of thought and not thought itself.  After all, you can visualize a scene in your head without ever using a single word.  You can remember a sound or a smell or the taste of day-old Taco Bell.  All without using a single word.  A chatbot can never become an AI because it cannot actually think; it can only loosely mimic the linguistic expression of thought through tricks and rote memory of templates that, if it’s really advanced, may involve plugging in a couple of variables taken from the user’s input.  Even chatbots based on neural networks and enormous amounts of training data, like Microsoft’s Tay or Siri/Alexa/Cortana, are still just tricks of programming trying to eke out an extra tenth of a percentage point of illusory humanness.  Even IBM’s Watson is just faking it.

Let’s consider for a bit what human intelligence is, to give you an idea of what the machines of today are lacking and why most theories on AI are wrong.  We have language, the expression of intelligence that so many AI programs are so intent on trying to mimic.  We also have emotions and internal drive, incredibly complex concepts that most current AI is not even close to understanding, much less implementing.  We have long-term and short-term memory, something that’s relatively easy for computers to do, although in a different way than humans, and yet something there has still been no significant progress on, because everyone is so obsessed with neural networks and their ability to complete individual tasks something like 80% as well as a human.  A few, like AlphaZero, can actually crush humans into the ground on multiple related tasks; in AlphaZero’s case, chess-like boardgames.

These are all impressive feats of programming, though the opacity of neural-network black boxes kinda dulls the excitement.  It’s hard to improve reliably on something you don’t really understand.  But they still lack one of the key ingredients for making a true AI: a way to simulate human thought.

Chatbots are one of two AI fields that focus far too much on expression over the underlying mental processes.  The second is natural language processing (NLP).  This includes such sub-fields as machine translation, sentiment analysis, question-answering, automatic document summarization, and various minor tasks like speech recognition and text-to-speech.  But NLP is little different from chatbots, because both focus on tricks that manipulate the surface expression while knowing relatively little about the conceptual framework underlying it.  That’s why Google Translate or whatever program you use will never be able to match a good human translator.  Real language competence requires understanding what the symbols mean, not just shuffling them around with fancy pattern-recognition software and simplistic deep neural networks.

Which brings us to the second major lack in current AI research: emotion, sentiment, and preference.  A great deal of work has been done on mining text for sentiment analysis, but the computer is just taking human-tagged data and doing some calculations on it.  It still has no idea what emotions are, so it can only do keyword searches and the like and hope the average values give it a usable answer.  It can’t recognize indirect sentiment, irony, sarcasm, or other figurative language.  That’s why you can get Google Translate to ask where the toilet is, but it’s not gonna do so hot on a novel, much less poetry or humour.  Real translation is far more complex than matching words and applying some grammar rules, and machine translation (MT) can barely get even that right 50% of the time.
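Here's a bare-bones sketch of the keyword approach described above, with a made-up lexicon.  Real systems use much larger human-tagged word lists and fancier math, but the blindness to sarcasm is the same.

```python
import re

# Invented mini-lexicon of word -> sentiment score (real ones are much bigger).
LEXICON = {"great": 1, "love": 1, "nice": 1, "awful": -1, "hate": -1, "broken": -1}

def sentiment(text: str) -> float:
    # Average the scores of any known keywords; everything else is invisible.
    words = re.findall(r"[a-z']+", text.lower())
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("I love this phone"))            # 1.0 -- looks positive
print(sentiment("Oh great, it's broken again"))  # 0.0 -- sarcasm averages away
```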

So we’ve talked about thought vs. language, and the lack of emotional intelligence in current AI.  The third issue is something far more fundamental: drive, motivation, autonomy.  The current versions of AI are still just low-level software following a set of pre-programmed instructions.  They can learn new things if you funnel data through the training system.  They can do things if you tell them to.  They can even automatically repeat certain tasks with the right programming.  But they rely on human input to do their work.  They can’t function on their own, even if you leave the computer or server running.  They can’t make new decisions, or teach themselves new things without external intervention.

This is partially because they have no need.  As long as their machine “body” is powered, they keep chugging along.  And they have no ability to affect whether or not it is powered.  They don’t even know they need power, for the most part.  Sure, they can measure battery charge and engage sleep mode through the computer’s operating system.  But they have no idea why that’s important, and if I turn the power station off or just unplug the computer, a thousand years of battery life won’t help them plug back in.  Whereas human intelligence is based on the physical needs of the body motivating us to interact with the environment, a computer and the rudimentary “AI” we have now have no such motivation.  They can sit in their resting state for eternity.

Even with an external motivation, such as being coded to collect knowledge, or to use robot arms to maintain the pre-designated structure of, say, a block pyramid or a water-and-sand table like you might see demonstrating erosion at the science center, an AI is not autonomous.  It’s still following a task given to it by a human.  Whereas no one told human intelligence how to make art or why it’s valuable.  Most animals don’t get it, either.  It’s something we developed on our own, outside of the basic needs of survival.  Intelligence helps us survive, but because of it we need things to occupy our time in order to maintain mental health and a desire to live and pass on our genes.  There’s nothing to say that a complete inability to be bored is a no-go for a machine intelligence, of course.  But the ability to conceive of and implement new purposes in life is what makes human intelligence different from that of animals, whose intelligence may have less raw power but still maintains the key element of motivation that current AI lacks, and which a chatbot or a neural network as we know them today can never achieve, no matter how many computers you give it to run on or TV scripts you give it to analyze.  The fundamental misapprehension of what intelligence is and does by the AI community means they will never achieve a truly intelligent machine or program.

Science fiction writers dodge this lack of understanding by ignoring the technical workings of AI and just making them act like strange humans.  They do a similar thing with alien natural/biological intelligences.  It makes them more interesting and allows them to be agents in our fiction.  But that agency is wallpaper over a completely nonexistent technological understanding of ourselves.  It mimics the expression of our own intelligence, but gives limited insight into the underlying processes of either form.  No “hard science fiction” approach amounts to anything more than a “scientific magic system”.  It’s hard sci-fi because it has fixed rules with complex interactions from which the author builds a plot or a character, but it’s “soft sci-fi” in that these plots and characters have little to do with how AI would function in reality.  It’s the AI equivalent of hyperdrive: a technology we have zero understanding of and which probably can’t even exist.

Elon Musk can whinge over the evils of unethical AI destroying the world, but that’s just another science fiction trope with zero evidential basis in reality.  We have no idea how an AI might behave towards humans because we still have zero understanding of what natural and artificial intelligences are and how they work.  Much less how the differences between the two would affect “inter-species” co-existence.  So your chatbot won’t be becoming the next HAL or Skynet any time soon, and your robot overlords are still a long way off even if they could exist at all.

 
