
Category Archives: Science Fact

Poetry, Language, and Artificial Intelligence

Poetry exemplifies how the meaning of a string of words depends not only upon the sum of the meanings of the individual words, or on the order in which they are placed, but also upon something we call “context”.  Context is essentially the concept that a single word (or idea) has a different meaning depending on its surroundings.  These surroundings could be linguistic (the language we assume the word belongs to, for example), environmental (say it’s cold out and I say “It’s sooooooo hot.”), or a matter of recent events: “The Mets suck” means something very different if they’ve just won a game than if they’ve just lost one.

Poetry is the art of manipulating these various possible contexts to get across a deeper or more complex meaning than the bare string of words itself could convey.  The layers of meaning run infinitely deep; in any form of creative writing, no single human could ever understand all of them.  I say poetry is the “art” of such manipulation because it is usually the least subtle about engaging in it.  All language acts manipulate context.  Even using a simple pronoun manipulates context to express meaning.

And we don’t decode this manipulation separately from decoding the bare language.  It happens as a sort of infinite feedback loop, working on all the different layers of an utterance at once.  The ability to both manipulate concepts infinitely and understand our own infinite manipulations might be considered the litmus test for “intelligent” life.

 

Returning to the three words in our title, I’ve discussed everything but AI.  The difficulty in creating AGI, or artificial general intelligence, lies in the fact that nature had millions or billions of years to sketch out and color in the complex organic machine that grants humans this power of manipulation, whereas humans have had maybe a hundred.  In a classic chicken-and-egg problem, it’s quite difficult to have either the concept web or the system that utilizes it without the other part.  If the system creates the web, how do you know how to code the system without knowing the structure of the web?  And if the web comes first, how can you manipulate it without the complete system?

You might have noticed a perfect example of how context affects meaning in that previous paragraph, one that was not intentional, but that I noticed as I went along: “chicken-and-egg problem”.  You can’t possibly know what I meant by that phrase without having previously been exposed to the philosophical question of which came first, the chicken that laid the egg, or the egg the chicken hatched from.  But once you do know about the debate, it’s pretty easy to figure out what I meant by “chicken-and-egg problem”, even though in theory you have infinite possible meanings.

How in the world are you going to account for every single one of those situations when writing an AI program?  You can’t.  You have to have a system based on very general principles, one that can deduce such connections on its own.

 

Although I am a speculative fiction blogger, I am still a fiction blogger.  So how does this post relate to fiction?  When writing fiction, you are engaging in the sort of context manipulation I’ve discussed above as such an intractable problem for AI programmers.  Because you are an intelligent being, you can engage in it instinctually when writing, but unless you are a rare genius, you are more likely to need to engage in it explicitly.  Really powerful writing comes from knowing exactly what context an event in the story is occurring in and taking advantage of that for emotional impact.

The death of a main character is more moving because you have the context of the reader’s emotional investment in that character.  An unreliable narrator is a useful tool in a story because the truth is more surprising either when the character knew it and purposefully didn’t tell the reader, or when neither of them knew it, but it was reasonable given the information both had.  Whereas if the truth is staring the reader in the face but the character is clutching the idiot ball to advance the plot, a reader’s reaction is less likely to be shock or epiphany and more likely to be “well, duh, you idiot!”

Of course, context can always go a layer deeper.  If there are multiple perspectives in the story, the same situation can lead to a great deal of tension because the reader knows the truth, but also knows there was no way this particular character could.  But you can also fuck that up and be accused of artificially manipulating events for melodrama, as when a simple phone call could have cleared up the misunderstanding but you went to unbelievable lengths to prevent it, even though both characters had cell phones and each other’s numbers.

If the only conceivable reason the call didn’t take place was that the author stuck their nose in to prevent it, you haven’t properly used or constructed the context for the story.  On the other hand, perhaps there was an unavoidable reason one character lost their phone earlier in the story, one with sufficient connection to other important plot events that it wasn’t just an excuse to avoid the plot-killing phone call.

The point being, as I said before, that the possible contexts for language or events are infinite.  The secret to good writing lies in being able to judge which contexts are most relevant and making sure that your story functions reasonably within those contexts.  Ignoring a really out-of-the-way solution to a problem is obviously a lot more acceptable than ignoring the one staring you in the face.  Sure, your character might be able to send a Morse-code warning message by hacking the electrical grid and blinking the power to New York repeatedly.  But I suspect your readers would be more likely to call you out for solving the communication difficulty that way than for not solving it with the characters’ easily reachable cell phones.

I mention the phone thing because currently, due to rapid technological progress, contexts are shifting far more rapidly than they did in the past.  Plot structures honed for centuries around a lack of easy long-range communication are much less serviceable as archetypes now that we have cell phones.  An author who grew up before the age of ubiquitous smart-phones for your seven-year-old is going to have a lot more trouble writing a believable contemporary YA romance than someone who is turning twenty-two in the next three months.  But even then, there are far fewer context-verified, time-tested plot structures to base such a story on than there would be for a similar story set in the 50s.  Just imagine how different Romeo and Juliet would have been if they could have just sent a few quick texts.

In the past, the ability of the characters to communicate at all was a strong driver of plots.  These days, it’s far more likely that the trustworthiness of communication will be a central plot point.  In the past, the possible speed of travel dictated the pacing of many events.  That’s far less of an issue nowadays; more likely, it’s a question of whether you missed your flight.  Although the increased speed of communication might make some plots more unlikely, it does counteract to some extent the changes in travel speed.  It might be valuable for your own understanding and ability to manipulate context to look at some works in older settings and some in newer ones, and compare how the author’s understanding of context increased or decreased the impact and suspension of disbelief for the story.

Everybody has some context for your 50s love story because they’ve been exposed to past media depicting it.  And a reader is less likely to criticize shoddy contextualizing when they lack any firm context of their own.  Whereas of course an expert on horses is far more likely to find and be irritated by mistakes in your grooming and saddling scenes than a kid born 16 years ago is to criticize a baby-boomer’s portrayal of the 60s.

I’m going to end this post with a wish for more stories–both SpecFic and YA–more strongly contextualized in the world of the last 15 years.  There’s so little of it, if you’re gonna go by my high standards.

 


AI, Academic Journals, and Obfuscation

A common complaint about the way academic journals are published and distributed is that the system obfuscates and obscures the true bleeding edge of science and even the humanities.  Many an undergrad has complained about finding a dozen sources for their paper, only to discover that all but two of them were behind absurd paywalls, even after accounting for the subscriptions available through their school library.  One of the best arguments that “information wants to be free” is a fallacy is the way in which academic journals prevent the spread of potentially valuable information and hinder the indirect collaboration between multiple researchers that would likely advance our frontier of knowledge the fastest.

In the corporate world, there is the concept of the trade secret: a piece of information that creates the value in a product, or lowers a specific corporation’s cost of production, and thereby provides that corporation with a competitive edge over other companies in its field.  Although patents and trade-secret laws provide incentives for companies to innovate and create new products, the way academic journals are operated hinders innovation and advancement without granting direct benefits to the people creating the actual new research.  It benefits instead the publishing company, whose profit depends on the exclusivity of the research rather than on the value of the research itself to spur scientific advancement and create innovation.

Besides the general science connection, this issue is relevant to a blog like the Chimney because of the way it relates to science fiction and the plausibility and/or obsolescence of the scientific or world-building premise behind the story.

Many folks who work in the hard sciences (or even the social sciences) have an advantage in the premise department, because they have knowledge, and the ability to apply it, at a level an amateur or a generalist is unlikely to be able to replicate.  Thus, many generalists or plain-old writers who work in science fiction make use of a certain amount of handwavium in their scientific and technological world-building.  Two of the most common examples of this are in the areas of faster-than-light (FTL) travel (and space travel in general) and artificial intelligence.

I’d like to argue that there are three possible ways to deal with theoretical or futuristic technology in the premise of an SF novel:

  1. To research, as much as possible, the actual way in which a technology works and is used, or the best possible guess based on current knowledge of how such a technology could work and be used, and to include that in your world-building and plotting.  This would include the possibility of basing actual plot elements on quirks inherent in a given implementation.  So if your FTL engine has some side-effect, then the world-building and the plot would both heavily incorporate that side-effect.  Perhaps some form of radiation with dangerous effects both dictates the design of your ships, and the results of that radiation affecting humans dictate some aspect of the society that uses these engines (maybe in comparison to a society using another method?).  Here you are firmly in “hard” SF territory and are trying to “predict the future” in some sense.
  2. To say fuck it and leave the mechanics of your FTL mysterious, but have it there to make possible some plot element, such as fast travel and interstellar empires.  You’ve got a worm-hole engine, say, that allows your story, but you barely delve into, or completely ignore, how such a device might cause your society to differ from the present world.  The technology is a narrative vehicle rather than itself the reason for the story.  In (cinematic) Star Wars, for example, neither the Force nor the hyper-drive is explained in any meaningful way, but they serve to make the story possible.
  3. A sort of mix between the two, involving obviously handwavium technology, but with a set of rules which serve to drive the story.  While the second type is arguably not true speculative fiction, but just utilizes the trappings for drama’s sake, this type is speculative, but within a self-consciously unrealistic premise.

 

The first type of SF often suffers from becoming dated, as the theory is disproven or a better alternative is found.  This also suggests a possible fourth type, so-called retro-futurism, wherein an abandoned form of technology is taken beyond its historical application, such as with steampunk.

And therein lies a prime connection between our two topics: a technology used in a story may already be dated without the author even knowing it.  This could be because they came late to the trend and haven’t caught on to its real-world successor; it could also be because an academic paywall, or a company on the brink of releasing a new product, has kept the advancement private from the layperson, which many authors are.

Readers may be surprised to find that there’s a very recent real-world example of this phenomenon: artificial intelligence.  Currently, someone outside the field who has read up on the “latest advances” for various reasons might be led to believe that deep learning, neural networks, and statistical natural language processing are the precursors, or even the prototype technologies, that will bring about real general/human-like artificial intelligence, either in the near or far future.

That can be forgiven pretty easily, since the real precursor to AI is sitting behind a massive build-up of paywalls and corporate trade secrets.  While very keen individuals may have heard of the “memristor”, a sort of circuit capable of behavior similar to a neuron’s, this is a hardware innovation.  There is speculation that modified memristors might be able to closely model the activity of the brain.

But there is already a software solution: the content-agnostic relationship mapping, analysis, formatting, and translation engine.  I doubt anyone reading this blog has ever heard of it.  I would indeed be surprised if anyone at Google or Microsoft had, either.  In fact, I only know of it by chance, myself.  A friend I’ve been doing game design with on and off for the past few years told me about it while we were discussing the AI model used in the HTML5 tactical-RPG Dark Medallion.

Content-agnostic relationship mapping is a sort of neuron-simulation technology that permits a computer program to learn and categorize concept-models in a way that is similar to how humans do; it is basically the data structure underlying the software “stack”.  The “analysis” part refers to the system and algorithms used to review and perform calculations based on input from the outside world.  “Formatting” is the process of turning the output of the system into intelligible communication; you might think of this as analogous to language production.  Just like human thought, the way this system “thinks” is not necessarily all-verbal.  It can think in sensory input models just like a person: images, sounds, smells, tastes; it can also combine these forms of data into complete “memories”.  “Translation” refers to the process of converting the stored information from the underlying relationship map into output mediums: pictures, text, spoken language, sounds.

“Content-agnostic” means that the same data structures can store any type of content.  A sound, an image, a concept like “animal”: all of these can be stored in the same type of data structure, rather than, say, storing visual information as actual image files or sounds as audio files.  Text input is understood and stored in these same structures, so the system does not merely analyze and regurgitate text files like the current statistical language-processing systems, or use plug-and-play response templates like a chat-bot.  Further, the system is capable of output in any language it has learned, because the internal representations of knowledge are not stored in any one language such as English.  It’s not translation, but rather spontaneous generation of speech.
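
Since the engine itself is locked away, here’s a minimal sketch in Python of what I imagine such a content-agnostic node might look like.  Every class and field name below is my own invention for illustration, not the engine’s actual design:

    # A minimal sketch of a content-agnostic concept store.
    # All names are hypothetical; the real engine's design is not public.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """One unit of content: a concept, a percept, or a word."""
        label: str                # human-readable tag, for debugging only
        payload: bytes = b""      # raw sensory data, if any (pixels, samples)
        links: list = field(default_factory=list)

    @dataclass
    class Link:
        """A typed association between two nodes ("has-a", "is-a", ...)."""
        kind: str
        target: Node
        weight: float = 1.0       # strength of the association

    # The same structures hold an image, a sound, or an abstraction:
    horse = Node("horse")
    mane = Node("mane")
    neigh = Node("neigh", payload=b"...audio samples...")
    horse.links.append(Link("has-a", mane))
    horse.links.append(Link("makes-sound", neigh))

The point of the sketch is only that nothing in the structure cares whether the payload is sound, image, or abstraction; the “meaning” lives entirely in the links.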

It’s debatable whether this system is truly intelligent/conscious, however.  It’s not going to act like a real human.  As far as I understand it, it possesses no driving spirit like a human, which might cause it to act on its own.  It merely responds to commands from a human.  But I suspect that such an advancement is not far away.

Nor is there an AI out there that can speak a thousand human languages and program new AIs, or write novels.  Not yet, anyway.  (Although apparently they’ve developed it to the point where it can read a short story and answer questions about it, like the names of the main characters or the setting.)  My friend categorized this technology as somewhere between an alpha release and a beta release, probably closer to alpha.

Personally, I’ll be impressed if they can just get it reliably answering questions/chatting in English and observably learning and integrating new things into its model of the world.  I saw some screenshots and a quick video of what I’ll call an fMRI equivalent, showing activation of the individual simulated “neurons” and of the entire “brain” during some low-level tests.  Wikipedia seems to say the technical term is “gray-box testing”, but since I have no formal software-design training, I can’t say whether I’m misunderstanding that term or not.  Basically, they have a zoomable view of the relationship map, and when the program activates the various nodes, they light up on the screen.  So, if you ask the system how many legs a cat has, the node for “cat” will light up, followed by the node for “legs”, and maybe the node for “possession”, and possibly other nodes for related concepts as well.  None of the images I saw actually labelled the nodes at the level of zoom shown, nor do I have a full understanding of how the technology works.  I couldn’t tell anyone enough for them to reproduce it, which I suppose is the point, given that if this really is a useable technique for creating AIs, it’s probably worth more than the blog platform I’m writing this on, or maybe even all of Google.
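
To picture that “lighting up”, here’s a toy spreading-activation pass over a hand-made adjacency map.  The graph, the weights, and the function are all invented for illustration; I have no idea how the real system propagates activation:

    # Toy spreading activation: seed "cat" and "legs", then watch which
    # related nodes receive activation. All data here is invented.
    graph = {
        "cat": {"legs": 0.9, "fur": 0.8, "animal": 0.7},
        "legs": {"possession": 0.6, "four": 0.5},
        "possession": {}, "fur": {}, "animal": {}, "four": {},
    }

    def activate(seeds, decay=0.5, floor=0.1):
        levels = {s: 1.0 for s in seeds}
        frontier = list(seeds)
        while frontier:
            node = frontier.pop()
            for neighbor, weight in graph.get(node, {}).items():
                spread = levels[node] * weight * decay
                if spread > max(levels.get(neighbor, 0.0), floor):
                    levels[neighbor] = spread
                    frontier.append(neighbor)
        return levels

    # "How many legs does a cat have?" -> "cat" and "legs" light up fully,
    # while "possession" and "four" glow more dimly.
    print(activate(["cat", "legs"]))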

 

Getting back to our original topic: while this technology certainly seemed impressive to me, it’s quite possible it’s just another garden-path technology, as I believe statistical natural language processing to be.  Science fiction books with clear ideas of how AI will work are actually quite few and far between.  Asimov’s Three Laws, for example, are not about how robot brains work, but rather about higher-level questions, like whether AI will want to harm us.  In light of what I’ve argued above, perhaps that’s the wisest course.  But then again, plenty of other fields and technologies are elaborately described in SF stories, and these descriptions are used to restrict and/or drive the plot and the actions of the characters.

If anyone does have any book recommendations that get into the details of how AI works in the story’s world, I would love to read some.

 


Machine “Translation” and What Words Mean in Context

One of the most commonly known flaws of machine translation is a computer’s inability to understand differing meaning in context.  After all, a machine doesn’t know what a “horse” is.  It knows that “caballo” has (roughly) the same meaning in Spanish as “horse” does in English.  But it doesn’t know what that meaning is.

And it certainly doesn’t know what it means when we say that someone has a “horse-face” (or a “face like a horse”).

 

But humans can misunderstand meaning in context, too.  For example, if you don’t know how “machine translation” works, you’d think that machines could actually translate or produce translations.  You would be wrong.  What a human does to produce a translation is not the same as what a machine does to produce a “translation”.  That’s why machine and human translators make different mistakes when trying to render the original meaning in the new language.

 

A human brain converts words from the source language into meaning, and the meaning back into words in the target language.  A computer converts words from the source language directly to words in the target language, creating a so-called “literal” translation.  A computer would suck at translating a novel, because the figures of speech that make prose (or poetry) what it is are incomprehensible to a machine.  Machine translation programs lack the deeply associated (inter-connected) knowledge base that humans use when producing and interpreting language.

 

A more realistic machine translation (MT) program would require an information web with connections between concepts, rather than words, such that the concept of “horse” would be related to the concepts of “leg”, “mane”, “tail”, “rider”, etc., without any intervening linguistic connection.

Imagine a net of concepts represented as data objects.  These are connected to each other in an enormously complex web.  Then, separately, you have a net of linguistic objects, such as words and grammatical patterns, which are overlaid on the concept net and interconnected with it.  The objects representing the words “horse” and “mane” would not have a connection, but the objects representing the concepts underlying these words would have, perhaps, a “has-a” connection, itself represented by a connection or “association” object.
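
Here’s a sketch of that two-layer web in Python, with all class and field names invented for illustration:

    # Sketch: a concept layer plus a separate linguistic layer overlaid on it.
    # All names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        gloss: str                                        # debugging label, not a word
        associations: dict = field(default_factory=dict)  # kind -> list of Concepts

        def associate(self, kind, other):
            self.associations.setdefault(kind, []).append(other)

    @dataclass
    class Word:
        form: str           # surface string, e.g. "horse" or "caballo"
        language: str
        meaning: Concept    # the link down into the concept layer

    horse_c = Concept("equine animal")
    mane_c = Concept("hair along a horse's neck")
    horse_c.associate("has-a", mane_c)           # conceptual link, no words involved

    horse_en = Word("horse", "en", horse_c)
    caballo_es = Word("caballo", "es", horse_c)  # two words, one shared concept

Translation in this model is a walk down from one word into the concept layer and back up into another language’s word layer, never a direct word-to-word hop.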

In order to translate between languages like a human would, you need your program to have an approximation of human understanding.  A famous study suggested that in the brain of a human who knows about Lindsay Lohan, there’s an actual “Lindsay” neuron, which lights up whenever you think about Lindsay Lohan.  It’s probably lighting up right now as you read this post.  Similarly, in our theoretical machine translation program’s information “database”, you have a “horse” “neuron”, represented by the concept object I described above.  It’s separate from the linguistic object that contains the word group “Lindsay Lohan” (or the word “horse”), though probably connected to it.

Whenever you dig the concept of horse or Lindsay Lohan out of your long-term memory, your brain sort of primes the concept by loading it and related concepts into short-term memory, so your “rehab” neuron probably fires pretty soon after your Lindsay neuron.  Similarly, our translation program wouldn’t keep its whole data-set in RAM constantly, but would load pieces from whatever our storage medium is, based on what’s connected to the currently loaded portion of the web.
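
A crude sketch of that priming behavior as a lazy-loading working set; the neighbor table and every name in it are invented:

    # Sketch: "priming" as lazy loading. Pulling a concept into the working
    # set also pre-fetches its neighbors. All data is invented.
    NEIGHBORS = {                 # stand-in for the on-disk relationship web
        "lindsay lohan": ["rehab", "actress", "mean girls"],
        "horse": ["mane", "tail", "rider"],
    }

    class WorkingMemory:
        def __init__(self, capacity=8):
            self.capacity = capacity
            self.loaded = []      # most recently primed last

        def prime(self, concept):
            for item in [concept] + NEIGHBORS.get(concept, []):
                if item in self.loaded:
                    self.loaded.remove(item)
                self.loaded.append(item)      # refresh recency
            while len(self.loaded) > self.capacity:
                self.loaded.pop(0)            # evict the stalest concept

    wm = WorkingMemory()
    wm.prime("lindsay lohan")
    print(wm.loaded)  # ['lindsay lohan', 'rehab', 'actress', 'mean girls']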

Current MT programs don’t translate like humans do.  No matter what tricks or algorithms they use, it’s all based on manipulating sequences of letters and basically doing math based on a set of equivalences such as “caballo” = “horse”.  Whether they do statistical analysis on corpora of previously translated phrases and sentences, like Google Translate, to find the most likely translation, or a straightforward dictionary look-up one word at a time, they don’t understand what the text they are matching means in either language, and that’s why current approaches will never be able to compare to a reasonably competent human translator.
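
To see the word-level version of that blindness, here’s a toy dictionary look-up “translator”.  The mini-lexicon is invented, and real systems are far more sophisticated, but the point stands: it matches strings, never meanings:

    # Toy word-for-word "translation": pure string substitution.
    # The mini-dictionary is invented for illustration.
    LEXICON = {"the": "el", "horse": "caballo", "has": "tiene",
               "a": "una", "long": "larga", "face": "cara"}

    def translate(sentence):
        return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

    print(translate("The horse has a long face"))
    # -> "el caballo tiene una larga cara"
    # The adjective order is wrong (Spanish wants "cara larga"), and the
    # figurative sense of a "long face" is lost entirely: the program has
    # no idea what a horse, a face, or sadness is.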

It’s also why current “artificial intelligence” programs will never achieve true human-like general intelligence.  So even your best current chatbot has to use tricks, like pretending to be a Ukrainian teenager with bad English skills on AIM, to pass the so-called Turing test.  A sidewalk artist might draw a picture-perfect crevasse that seems to plunge deep into the Earth below your feet.  But no matter how real it looks, your elevation isn’t going to change.  A bird can’t nest in a picture of a tree, no matter how realistically depicted.

Calling what Google Translate, or any machine “translation” program, does “translation” has to be viewed in context, or else it’s quite misleading.  Language functions properly only in the proper context, and that’s something statistical approaches to machine translation will never be able to imitate, no matter how many billions they spend on hardware or algorithm development.  Could you eventually get them to where they can probably, usually, mostly communicate the gist of a short newspaper article?  Sure.  Will you be able to engage live in witty repartee with your mutually language-exclusive acquaintance over Skype?  Probably not.  Not with the kind of system we have now.

Though crude, our theoretical program with the knowledge web described above might take us a step closer, but even if we could perfect and polish it, we’re still a long way from truly useful translation or AI software.  After all, we don’t even understand how we do these things ourselves.  How could we create an artificial version when the natural one still eludes our grasp?

 


AI and AlphaGo: Why It’s Not the Big Deal It’s Made Out to Be

I’d like to open this post by admitting I am not a Go master.  I’ve played a few times and watched Hikaru no Go when nothing else was on, but that’s about it.  However, I don’t need to be an expert at the game to point out the flaw in some of the press coverage.  I suspect actual AI researchers already know what I mean.

The first thing to remember is that AlphaGo is a deep-learning program built on a neural network.  What that means is that rather than an artificial intelligence program, AlphaGo is an artificial learning program.  Public perception of AI is still focused on artificial intelligence, but the field has now expanded to cover many related or tangential or component areas of study.  AlphaGo also has some form of reasoning ability, but this ability is solely related to Go.  You cannot generalize its algorithms to other tasks.  In fact, DeepMind even admits there are better programs out there to play Chess.  Chess and Go are both “perfect information” (PI) games: you can, if you so choose, know everything about a given game of Chess or Go by looking at the board.  You know all the rules and the positions of all the pieces.  PI games are a very popular area of AI research, because programs can do a lot with them.  The information can be reduced to a very small set of states and rules, which is ideal for computers to excel at.  The trick, of course, is to teach the computer the best set of tactics for taking those rules and the initial state of the game, and trading states with another player, to get to the win state.  And yet, even between two PI games, the best AI approach to building a player capable of competing with the best humans is different for each game.
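
To make “a small set of states and rules” concrete, here’s a toy perfect-information game reduced to a state and a legal-move rule.  Tic-tac-toe stands in for Go or Chess; the code is purely illustrative:

    # A perfect-information game reduced to states and rules: the whole
    # game is visible in `board`, and `moves` enumerates every legal option.
    def moves(board):
        """All legal moves: any empty square."""
        return [i for i, cell in enumerate(board) if cell == " "]

    def play(board, i, player):
        """The transition rule: mark square i, producing a new state."""
        return board[:i] + player + board[i + 1:]

    board = " " * 9              # the complete state: nothing is hidden
    board = play(board, 4, "X")  # X takes the center
    board = play(board, 0, "O")  # O answers in a corner
    print(moves(board))          # every remaining option, visible to both players

Everything a program needs is right there on the board; the hard part is learning which of those states to steer toward.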

I like to call this “specific intelligence”, although the more popular terms are weak AI or narrow AI: a kind of non-sentient intelligence focused on solving one task or a narrow range of tasks.  But even that is a bit of a misnomer.  After all, the machines aren’t truly smart, just impressively programmed dumb machines.

However, a learning program like AlphaGo comes a bit closer to true intelligence (though not sentience) by being able to take the initially programmed rules and knowledge and extrapolate from them on its own, to do things it wasn’t explicitly hard-coded by the programmers to do.  It’s incredibly impressive.  But it’s not “AI” in the way most layfolk think of it.  It’s not general intelligence, even a crude version.  It’s a very sophisticated piece of specific intelligence.

But there’s a second flaw in the coverage.  Set aside the great deal of mystique that’s built up around Go, which isn’t an issue of AI, although some of it is misplaced (no, another lifeform would not “almost certainly play Go” while Chess remains too human-specific).  Even as a powerful example of narrow AI, AlphaGo does not, as stated by some professional players, “play Go just like a human, but better”.  There has been much talk of its unorthodox tactics, and of its algorithm’s focus on win-rate over all else.  Some have even said it made moves “only God could have made”, a common expression for a perfect move.

 

But the real truth is this: much like a program built with genetic programming, a style of coding in which a computer is given basic building blocks of code and tasked with mixing them up until it finds a closer-to-optimal solution, AlphaGo has no idea it is playing Go.  As far as AlphaGo knows, it’s just trading ones and zeroes around until it finds the desired sequence.  The ways in which a human player attempts to reach the winning board position are inherently different from the ways a computer does, because they aren’t really pursuing the same goal.
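
A minimal genetic-style search makes the analogy concrete: the program below “evolves” a bit string toward a target score without any notion of what the bits mean.  Everything in it is invented for illustration:

    # Minimal genetic-style search: shuffle bits toward a higher score
    # with no idea what the bits "mean". Illustrative only.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # the "winning" sequence

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.2):
        return [1 - g if random.random() < rate else g for g in genome]

    genome = [random.randint(0, 1) for _ in TARGET]
    while fitness(genome) < len(TARGET):
        candidate = mutate(genome)
        if fitness(candidate) >= fitness(genome):   # keep the better mix
            genome = candidate
    print(genome)  # matches TARGET, but "meaning" never entered into it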

 

We’re not particularly closer to strong or general AI than we were before.  Go isn’t truly so different from any other PI game.  AlphaGo has not learned intuition.  It has merely played millions of games of Go, subtly adjusting the value it places on a given set of stone positions as it goes, until the win-rate increases more and more, to the point where it wins the game.  Although the process is superficially similar to the way a human learns the game, the lack of framing devices such as the vision used by humans has taught it to value entirely different things, and unlike a human, a computer has a perfect memory to go with the perfect information, and it is incapable of making an error.
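
Reduced to its crudest possible form, that learning loop is just: after each game, nudge the value of every visited position toward the observed result.  The positions and numbers below are invented, and the real system is vastly more sophisticated:

    # Toy version of "subtly adjusting the value of positions by win-rate":
    # every visited position's estimate creeps toward the game's outcome.
    values = {}   # position (as a string key) -> estimated win-rate

    def update(visited_positions, won, step=0.01):
        for pos in visited_positions:
            old = values.get(pos, 0.5)   # start neutral
            values[pos] = old + step * ((1.0 if won else 0.0) - old)

    # One imaginary self-play game: three positions seen, game won.
    update(["empty board", "black 4-4", "white 3-3"], won=True)
    print(values)   # each visited position crept slightly above 0.5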

After that, we can consider the psychological-warfare aspect of multi-player games.  AlphaGo may be able to beat anyone Lee Se-dol could, but it cannot judge its opponent’s experience and thus alter its strategy to beat that player faster or more elegantly.  Instead, it will always play the same way every time, and react no differently to a master making three opening moves than to a novice making the same.  But where a human might see those moves and be able to make a variety of plays depending on their intuition of the player’s skill or likely next move, AlphaGo will continue to inexorably play exactly the move that has the highest chance of victory against any and all players, rather than the one with the highest chance of victory against a specific individual.

 

Posted by on March 15, 2016 in atsiko, Science Fact

 
