
Author Archives: atsiko

Why Is A Picture Worth a Thousand Words? Information Density in Various Media

You’ve obviously heard the phrase “a picture is worth a thousand words”, and you probably even have an idea why we say that.  But rarely do people delve deeply into the underlying reasons for this truth.  And those reasons can be incredibly useful to know.  They can tell you a lot about why we communicate the way we do, how art works, and why it’s so damn hard to get a decent novel adaptation into theaters.

I’m going to be focusing mostly on that last complaint in this post, but what I’m talking about has all sorts of broad applications to things like good communication at work, how to tell a good story or joke, and how to function best in society.

So, there are always complaints about how the book, or the comic book, or whatever the original was, is better than the movie.  Or the other way around.  And that’s because different artistic media have different strengths in terms of how they convey information.  There are two reasons for this:

  1. Humans have five “senses”.  Basically, there are five paths through which we receive information from the world outside our heads.  The most obvious one is sight, closely followed by sound.  Arguably, touch (which really involves multiple sub-senses, like heat and cold and pain) is the third most important sense, and, in general, taste and smell are battling it out for fourth place.  This is an issue of “kind”.
  2. The second reason has to do with what I’m calling information density.  Basically, how much information a sense can transmit to our brains in how much time.  This is an issue of “degree”.  Sight, at least for humans, probably has the highest information density.  It gives us the most information per unit of time.

So how does that affect the strengths of various media?  After all, both movies and text mostly enter our brains through sight.  You see what’s on the screen and what’s on the page.  And neither can directly transmit information about touch, smell, or taste.

The difference is in information density.  Movies can transmit visual information (and audio) directly to our brains.  But text has to be converted into visual imagery in the brain, and it also takes a lot of text to convey a single piece of visual information.

AI, in the form of image recognition software, is famously bad at captioning photos.  Not only does it do a crappy job of recognizing what is in a picture, but it does a crappy job of summarizing it in text.  But really, could a human do any better?  Sure, you are way better than a computer at recognizing a dog.  But what about captioning?  It takes you milliseconds at most to see a dog in the picture and figure out it is jumping to catch the frisbee.  You know that it’s a black lab, and that it’s in the woods, probably around 4 in the afternoon, and that it’s fall because there are no leaves on the trees, and it must have rained because there are puddles everywhere, and that…

And now you’ve just spent several seconds at least reading my haphazard description.  A picture is worth a thousand words because it takes a relatively longer amount of time for me to portray the same information in a text description.  In fact, it’s probably impossible for me to convey all the same information in text.  Just imagine trying to write out every single bit of information explicitly shown in a half-hour cartoon show in text.  It would probably take several novels’ worth of words, and take maybe even days to read.  No one would read that book.  But we have no problem watching TV shows and movies.

Now go back and imagine our poor AI program trying to figure out the important information in the photo of the dog and how to best express it in words.  Yikes.  But as a human, you might pretty quickly decide that “a dog catches a frisbee” adequately describes the image.  Still takes longer than just seeing a picture, but isn’t all that much time or effort.  But, you’re summarizing.  A picture cannot summarize and really has no reason to.  With text (words) you have to summarize.  There’s pretty much no way around it.  So you lose an enormous amount of detail.

So, movies can’t summarize, and books must summarize.  Those are two pretty different constraints on the media in question.  Now, imagine a radio play.  It’s possible you’ve never heard one.  It’s not the same as an audiobook, despite communicating through the same sense (audio), and it has some serious advantages over books and audiobooks.  You don’t have to worry about conveying dialogue or sound information, because you can do that directly.  Emotion, accents, sound effects.  But of course you can’t convey visual information like a movie, and unlike in a book or an audiobook, it’s a lot more difficult to just summarize, because you’d have to have a narrator or have the characters include it in dialogue.  So raw text still has some serious advantages based on the conventions of the form.  Similarly, radio dramas/audio plays/podcasts and movies both have to break convention to include character thoughts in storytelling, while books don’t.

So, audio and television media have major advantages over text in their specific areas, but text is in general far more flexible in making up for any shortcomings.  And, it can take advantage of the summary nature of the medium when there’s a lot of unnecessary information.  Plus, it can count on the reader to be used to filling in details with their imagination.

Film and radio can’t do that.  They can use montages, cuts, and voiceovers to try and imitate what text can do, but it’s never quite the same effect.  And while language might not limit your ability to understand or experience concepts you have no words for, the chosen medium absolutely influences how effective various story-telling techniques can be.

Consider: an enormous battle scene with lots of action is almost always going to be “better” in a visual medium, because most of the relevant information is audio and visual information.  An action scene involving riding a dragon through an avalanche while multiple other people try to get out of the way or stop you involves a great deal of visual information, such that a text can’t convey everything a movie could.  Watching a tennis match is always going to be more exciting than reading about one, because seeing the events lets you decide without a narrator’s interference whether a player has a real shot at making a return off that amazing serve.  You can look at the ball, and using past experience, imagine yourself in the player’s place and get a feeling of just how impressive that lunging backhand really was.  You can’t do the same in text, because even if the writer could describe all the relevant information such that you could imagine the scene exactly in your head, doing so would kill the pacing because of how long reading that whole description would take.

The very best artists in any medium are always going to use that medium to its fullest, exploiting any tricks or hacks as best as possible to make their creation shine.  And that means they will (often unconsciously) create a story tailored to best take advantage of the medium they are working in.  If and when the time comes to change mediums, a lot of what really made the art work won’t be directly translatable because that other medium will have different strengths and have different “hacks” available to try to imitate actually experiencing events directly.  If you play videogames or make software, it’s sort of like how switching platforms or programming languages (porting the game) means some things that worked really well in the original game won’t work in the ported version, because the shortcut in the original programming language doesn’t exist in the new one.

So, if video media have such a drastically higher information density than text, how do really good authors get around these inherent shortcomings to write a book, say?  It’s all about understanding audience attention.  Say it again, “audience attention.”

While the ways you manipulate it are different in different media, the concept exists in all of them in some form.  The most obvious form is “perspective”, or the viewpoint from which the audience perceives the action.  In film, this generally refers to the camera, but there’s still the layer of who in the story the audience is watching.  Are we following the villain or the hero?  The criminal or the detective?

In film, the creator has the ability to include important visual information in a shot that’s actually focused on something else.  Because there’s no particular emphasis on a given object or person being included in the shot, things can easily be hidden in plain sight.  But in a book, where the author is obviously very carefully choosing what to include in the description in order to control pacing and be efficient with their description, it’s a lot harder to hide something that way.  “Chekhov’s gun” is the principle that irrelevant information should not be included in the story.  “If there’s a rifle hanging on the wall in Act 1, it must be fired in Act 2 or 3.”  Readers will automatically pay attention to almost anything the author mentions because why mention it if it’s not relevant?

In a movie, on the other hand, there’s lots of visual and auditory filler because the conceit is that the audience is directly watching events as they actually happened, so a living room with no furniture would seem very odd, even if the cheap Walmart end table plays no significant role in the story.  Thus, the viewer isn’t paying particular attention to anything in the shot if the camera isn’t explicitly drawing their eye to it.  The hangar at the Rebel Base has to be full of fairly detailed fighter ships even if we only really care about the hero’s.  But no novel is going to go in-depth in its description of 30 X-wings that have no real individual bearing on the course of events.  They might say as little as “He slipped past the thirty other fighters in the hangar to get to the cockpit where he’d hidden the explosives.”  Maybe they won’t even specify a number.

So whereas a movie has an easy time hiding clues, a writer has to straddle the line between giving away the plot twist in the first 5 pages and making it seem like a deus ex machina that comes out of nowhere.  But hey, at least your production values for non-cheesy backgrounds and sets are next to nothing!  Silver linings.

To get back to the main point, the strengths of the medium to a greater or lesser extent decide what kind of stories can be best told, and so a gimmick that works well in a novel won’t necessarily work well in a movie.  The narrator who’s secretly a woman or black, or an alien.  Those are pretty simplistic examples, but hopefully they get the point across.

In the second part of this post a couple days from now, I’ll be talking about how what we learned here can help us understand both how to create a more vibrant image in the reader’s head, and why no amount of research is going to allow you to write about a place or culture or subject you haven’t really lived with for most of your life like someone born to it would.


All You Need Is Kill Your Darlings

There’s been a lot of talk on Twitter today by many writers I admire about poorly expressed or conceived writing advice.  “Kill your darlings” has been taking the brunt of the assault.  But various writers have also tackled “show, don’t tell”, “write what you know”, “cut adjectives/adverbs”, etc.

 

Now, these “rules” of “good writing” are well known to be overapplied and misinterpreted to the detriment of many a conscientious neophyte scribbler.  But it’s also interesting to see the combination of straw-manning and overgeneralization being employed to criticize them.

“Kill your darlings” can be misinterpreted to mean many bad things, such as “kill everything you love”.  Everyone agrees the original meaning was not to let overly-cute prose ruin an otherwise well-written story.  More generally, it has evolved to mean that you have to be willing to cut things from your writing that don’t serve the goal of the story.  That’s a very nebulous concept, and “kill your darlings” doesn’t give any easy hints as to figuring out what might constitute a “darling” for practical purposes.  After all, every writer is different, and there are many valid styles of writing.  And to be honest, demanding that three words hold the secret to good writing is asking way too much.

“Show, don’t tell” comes in for similar misplaced acrimony.  It was never meant to say you couldn’t ever tell, but rather to address an incredibly common flaw of writing, with beginners especially: the narrator telling the reader how clever or witty the main character is, for example, while never backing this up with action and character development on the page.  You don’t need to have excessive purple description of the “beautiful palace”, but you do need to show your characters acting kind if you want to counterbalance ruthless or practical behavior in a protag with something fluffier.  If your general is the Alexander of his world, the reader will be more willing to suspend disbelief if they actually see him making smart strategic decisions or brilliant tactical maneuvers, rather than being defeated time after time despite all the praise heaped upon him by his subordinates.

“Cut adjectives/adverbs” is one of the rules that is far more of a stylistic choice than the others.  Being able to express those adverbs as part of the character’s speech patterns can be a cool stylistic move, but plenty of good writers use adverbs with “said” without falling prey to a Tom Swifty.  Choosing a more specific noun or verb can break narrative voice or result in thesaurusitis.  Adjectives and adverbs can be used to great effect.

As with all rules of writing, they are shorthand for larger, more complex discussions, and it’s incumbent on writers not to ignore known context to score easy points or excuse their own misunderstandings and need for growth as writers.

Now, I do think that there’s a toxic interaction between writing “rules” like these and the raising up of certain writing styles over others.  A spare, minimalistic style with “transparent prose” is the most vaunted style of writing in the modern era.  Which I think is too bad.  Not only character voice but authorial voice can add some really useful and enjoyable layers to a story.  I personally don’t talk, in my everyday speech, the way the community’s most-praised character voices are written.  I enjoy a so-called “purple prose” style of writing, full of metaphor and figures of speech, and dense language, and authorial imagery.  Not exclusively.  I like character voice-focused writing styles, as well.  And I enjoy reading both style groups.

There’s a lot of reductionism in what’s put forth as the best way to write.  But it’s not the minimalism of the various writing rules that’s the problem.  It’s in the views of what’s widely considered to constitute good prose.  Minimalist, fast-paced, shallow prose that requires less thinking and zips the story along.  And that’s a great way to tell a story.  But it’s far from the only way.  Instead of attacking “rules”, I think we should be more focused on widening the conception of what makes for good pacing, because speed may be popular, but it’s only one way to approach a narrative.

 

Posted by on June 10, 2018 in atsiko, How To, Rants, Writing

 


YA and SFF: The Good Twin and the Bad Twin

So as I was scrolling through my Twitter feed today, I ran across a link to this article by Fonda Lee: The Case for YA Science Fiction.  Read the post before you continue.  I’ll wait…

Okay.  So, the gist of the post is that YA Fantasy novels have been selling like crazy.  There are several big name authors, including those mentioned in Lee’s post and many others.  I can tell you right now I’ve read most of the books put out by all of those authors in the YA Fantasy genre.  And so have millions of others.  They may not be as popular as dystopians, and they certainly don’t get as many movie deals.  But they move a lot of dead trees and digital trees.  I’ve been blogging and writing long enough to remember four or five rounds of “Will Science Fiction be the next big thing in YA?”  And the answer was always no.  There would be upticks and uptrends.  Several fantastic books would come out in a short period.  But nothing would ever really break into the big money or sales the way YA Fantasy often does.  It wouldn’t be blasted all over the blogosphere, or the writers forums, or the tip top of the best sellers lists.  Which is too bad, because science fiction has a lot of value to add to YA as a category, and it can address issues and do so in ways not available to other genres.

Lee mentions several notable YA SF novels that take on current events and other contemporary issues that are ripe for exploration: MT Anderson’s Feed is a fantastic look at the way social media has been taken over by advertisers looking to build monetizable consumer profiles, and the ending, without spoilers, takes a look at just how far they go in valuing those profiles over the actual humans behind them.  She mentions House of the Scorpion, which I didn’t care for, but which is still a very good novel on the subject of cloning.  Scott Westerfeld never gets credit for his amazing additions to the YA SF canon, with the steampunk Leviathan series and the dystopian Uglies series.

YA SF has a lot of unmined treasure to be found, and maybe it will have to focus a bit on near-future SF for awhile, to whet the appetite of YA readers.  Some of the hard SF tropes Lee discusses in her post kinda bore me, honestly.  And as a writer I feel like saying “it’s magic” is popular because it’s simpler.  There’s always a huge debate in adult SFF about whether the worldbuilding or science details really add enough to the story compared to the narrative effects of the speculative elements.  The social issues we are having as a world today are incredibly accessible fruit for a YA SF novel to harvest.  Social media, AI/big data, consumer profiles, technology in education.

I mean, I know 8-year-olds whose schools give out tablets to every student to take advantage of what tech in the classroom can offer.  My high school was getting SmartBoards in every classroom just a year after I left in the late 2000s.  But you never see any of this in YA books.  They often feel set no later than my sophomore year of high school given the technology and social issues involved.  Being a teenager will always be being a teenager, but the 80s and early 90s are waaaaaaaaaaaaayyy different than what young adults encounter in their general environment today.  Of course, to be SF you can’t just upgrade the setting to the present day.

You have to extrapolate out quite a bit further than that.  But given the environment today’s teens are living in, doing so while keeping the story interesting and relatable is so easy.  What’s the next big advance in social media?  How will smart houses/the internet of things impact the lives of young adults for better or worse?  How will the focus of education change as more and more things that you used to have to do in your head or learn by rote are made trivial by computers?  What social or political trends are emerging that might have big consequences in the lives of future teenagers?  How could an author explore those more intensely with elements of science fiction than they could with a contemporary novel?

I definitely share Lee’s sense that YA “science fiction” grabs trappings to stand out from the crowd rather than being rooted inherently in the tropes of the genre.  It’s not uncommon for YA in general to play this game with various genre outfits, but sci-fi often seems the hardest hit.  That’s not a criticism of those books, but just pointing out it might give readers, writers, and publishers a false image of what SF really is and how YA can benefit from incorporating more of it.

As a reader, I’ve always dabbled in both the YA and Adult book cases.  And from that perspective, I wonder if the flavor of much of YA SF might be telling SF readers, teenaged or otherwise, that it’s just not the book for them.

As a writer, I have lots of novel ideas that are YA and SF, and I’d like to explore them, and maybe even publish some of them one day.  But I do have to wonder, given the wide variety of stories building in my head, am I taking a risk with my career by writing in such a threadbare genre?  Perhaps others with similar plot ideas feel the same, and that’s why they aren’t submitting these ideas (books) to publishers?

 


Smol Bots: ANNs and Advertising

So I recently read a great story by A. Merc Rustad, “it me, ur smol”.  The story is about an ANN, or artificial neural network.  You may or may not know that the neural net is the latest fad in AI research, replacing statistical models with a model based on–but not the same as!–your brain.  Google uses them for its machine translation, and many other machine translation companies have followed suit.  My last post also dealt with an ANN, in this case, one trained to recognize images.
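If you’ve never seen one up close, here’s a minimal sketch of what an artificial neural network actually computes: weighted sums squashed through a non-linearity, stacked in layers.  The sizes and numbers below are made up purely for illustration, and the weights are untrained random noise; real networks like the ones behind machine translation are enormously larger and are trained on huge datasets.

```python
import numpy as np

# A tiny feedforward "neural network": each layer is a weighted sum of its
# inputs passed through a non-linearity. This is the core computation; the
# "learning" part consists of nudging the weights to reduce errors.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random (untrained) weights: 4 inputs -> 3 hidden units -> 1 output.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

def forward(x):
    hidden = sigmoid(x @ W1)     # each hidden unit fires on a weighted mix of inputs
    return sigmoid(hidden @ W2)  # final output squashed to a value between 0 and 1

x = np.array([0.2, 0.9, 0.1, 0.4])
print(forward(x))  # an untrained guess; training would adjust W1 and W2
```

That’s the whole trick, loosely inspired by neurons but nothing like an actual brain.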

ANN accounts, like @smolsips in the story above, have become very popular on Twitter lately.  A favorite of mine is the @roborosewater account, which shares card designs for Magic: The Gathering created by a series of neural nets.  It’s lately become quite good at both proper card syntax and design, although it’s not significantly better at this than any other Twitter neural net is at other things.

The story itself takes some liberties with neural nets.  They are certainly not capable of developing into full AIs.  However, the real genius of the story is in the pitch-perfect depiction of the way human Twitter users and bots interact.  And similarly, the likely development of bots in the near future.  It’s quite likely that bot accounts will become a more significant and less dreaded feature of Twitter and other similar social networks as they improve in capability.

For example, rather than sock-puppet accounts, I’m very confident that bot accounts used for advertising or brand visibility, similar to the various edgy customer service accounts, will be arriving shortly.  They’ll use humour and other linguistic tools to make themselves more palatable as ads, and also to encourage a wider range of engagement as their tweets are shared more frequently for reasons having little to do with whatever product they may be shilling.

There are already chatbots on many social media platforms who engage in telephone tree-style customer service and attempt to help automate registrations for services.  The idea of a bot monitoring its own performance through checking its Twitter stats and then trying new methods as in the story is well within the capabilities of current neural nets, although I imagine they would be a tad less eloquent than @smolsips, and a tad more spammy.
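As a sketch of what that feedback loop might look like: the hypothetical bot below picks among tweet “strategies”, mostly exploiting whatever has earned the best engagement so far while occasionally experimenting (the classic epsilon-greedy approach).  Everything here is invented for illustration; the engagement numbers are simulated, and no real Twitter API is involved.

```python
import random

random.seed(0)  # make the simulation repeatable

# Invented tweet "strategies" and running engagement tallies for each.
strategies = ["pun", "emoji_flood", "earnest_reminder"]
totals = {s: 0.0 for s in strategies}
counts = {s: 0 for s in strategies}

def pick_strategy(epsilon=0.1):
    # Mostly exploit the best-known strategy; occasionally explore a random one.
    if random.random() < epsilon or all(c == 0 for c in counts.values()):
        return random.choice(strategies)
    return max(strategies, key=lambda s: totals[s] / max(counts[s], 1))

def record(strategy, engagement):
    totals[strategy] += engagement
    counts[strategy] += 1

# Simulated feedback: pretend puns reliably earn the most engagement.
for _ in range(500):
    s = pick_strategy()
    record(s, {"pun": 5.0, "emoji_flood": 2.0, "earnest_reminder": 1.0}[s] + random.random())

print(max(strategies, key=lambda s: totals[s] / max(counts[s], 1)))  # "pun" wins once its average is known
```

A real bot would plug actual like/retweet counts in where the simulated numbers are, but the decision loop would look much the same.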

I also really like the idea of a bot working to encourage good hydration.  Things like Fitbit or Siri or Google Home have already experimented shallowly with using AI to help humans stay healthy.  And as an organizing tool, Twitter itself has been used to great effect.  I would be quite un-shocked to find NGOs, charities, and government agencies making use of clever or cute bots to pursue other public policy goals.  Again, with less panache and more realism than in the story, but nonetheless strongly in the vein of what Rustad depicts our erstwhile energy drink namer trying out in its optimistic quest to save us from our own carelessness.

We’ve had apps along these lines before, but they tend to be reactive.  Active campaign and organizing in the style of @smolsips is something we haven’t seen very often, but which could be quite a boon to such efforts.

Although neural nets in this style will never be able to pass for real humans, due to structural limitations in their design, cleverly programmed ones can be both useful and entertaining.

Some other examples of bots I quite enjoy are:

  1. Dear Assistant uses the Wolfram Alpha database to answer factual questions.
  2. Grammar Police is young me in bot form.  It must have a busy life trying to standardize Twitter English.  XD
  3. Deleted Wiki Titles lets you know what shenanigans are happening over on the high school student’s favorite source of citations.
  4. This bot that tweets procedurally generated maps.
  5. This collaborative horror writer bot.
  6. This speculative entomology bot.
  7. The Poet .Exe writes soothing micro-poetry.

Suggest some of your favorite Twitter bots in the comments!

 


Do Androids Dream?

I’m here with some fascinating news, guys.  Philip K. Dick may have been joking with the title of his famous novel Do Androids Dream of Electric Sheep?  But science has recently answered this deep philosophical question for us.  In the affirmative.  The fabulous Janelle Shane trains neural networks on image recognition datasets with the goal of uncovering some incidental humour.  She’s taken this opportunity to answer a long-standing question in AI.  As it turns out, artificial neural networks do indeed dream of digital sheep.  Whether androids will too is a bit more difficult.  I’d hope we would improve our AI software a bit more before we start trying to create artificial humans.

As Shane explains in the above blog post, the neural network was trained on thousands or even millions (or more) of images, which were pre-tagged by humans for important features.  In this case, lush green fields and rocky mountains.  Also, sheep and goats.  After training, she tested it on images with and without sheep, and it turns out it’s surprisingly easy to confuse it.  It assumed sheep where there were none and missed sheep (and goats) staring it right in the face.  In the second case, it identified them as various other animals based on the other tags attached to images of them.  Dogs in your arms, birds in a tree, cats in the kitchen.

This is where Shane and I come to a disagreement.  She suggests that the confusion is the result of insufficient context clues in the images.  That is, fur-like texture plus a tree makes a bird; with a leash, it makes a dog; in a field, a sheep.  The network sees a field and expects sheep.  If there’s an over-abundance of sheep in the fields in the training data, it starts to expect sheep in all the fields.
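To make that “fields predict sheep” failure mode concrete, here’s a toy model that does nothing but count how often tags appear together, using a tiny invented training set.  It’s far cruder than a real ANN, but it shows how skewed co-occurrence in the training data alone can make “field” imply “sheep”.

```python
from collections import Counter

# Invented training data: each set is the human-applied tags for one image.
# Note that most images tagged "field" were also tagged "sheep".
training_tags = [
    {"field", "sheep"}, {"field", "sheep"}, {"field", "sheep"},
    {"field"}, {"tree", "bird"}, {"kitchen", "cat"},
]

pair_counts = Counter()
context_counts = Counter()
for tags in training_tags:
    for ctx in tags:
        context_counts[ctx] += 1
        for obj in tags - {ctx}:
            pair_counts[(ctx, obj)] += 1

def p_object_given_context(obj, ctx):
    # Conditional frequency: how often obj appeared alongside ctx in training.
    return pair_counts[(ctx, obj)] / context_counts[ctx]

print(p_object_given_context("sheep", "field"))  # 0.75: even an empty field "expects" sheep
```

A model leaning on statistics like these will confidently hallucinate sheep into any sufficiently green, grassy photo.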

But I wonder: what about the issue of paucity of tags?  Because of the way images are tagged, there’s not a lot of hint about what the tags are referring to.  Unlike more standard teaching examples, these images are very complex, and there are lots of things in them besides what the tags note.  I think the flaw is a lot deeper than Shane posits.   The AI doesn’t know how to recognize discrete objects like a human can.  Once you teach a human what a sheep is, they can recognize it in pretty much any context.  Even a weird one like a space-ship or a fridge magnet.  But a neural net isn’t sophisticated enough or, most generously, structured properly to understand what the word “sheep” is actually referring to.  It’s quite possible the method of tagging is directly interfering with the ANN’s ability to understand what it’s intended to do.

The images are going to contain so much information, so many possible changing objects that each tag could refer to, that it might be matching “sheep” say to something entirely different from what a human would match it to.  “Fields” or “lush green” are easy to do.  If there’s a lot of green pixels, those are pretty likely, and because they take up a large portion of the information in the image, there’s less chance of false positives.

Because the network doesn’t actually form a concept of sheep, or determine what entire section of pixels makes up a sheep, it’s easily fooled.  It only has some measure by which it guesses at their presence or absence, probably a sort of texture as mentioned in Shane’s post.  So the pixels making up the wool might be the key to predicting a sheep, for example.  Of course, NNs can recognize lots of image data, such as lines, edges, curves, fills, etc.  But it’s not the same kind of recognition as a human, and it leaves AIs vulnerable to pranks, such as the sheep in funny places test.

I admit to over-simplifying my explanations of the technical aspects a bit.  I could go into a lecture about how NNs work in general and for image recognition, but it would be a bit long for this post, and in many cases no one, not even the designers of a system, really knows everything about how it makes its decisions.  It is possible to design or train them more transparently, but most people don’t.

But even poor design has its benefits, such as answering this long-standing question for us!

If anyone feels I’ve made any technical or logical errors in my analysis, I’d love to hear about it, insomuch as learning new things is always nice.

 


Your Chatbot Overlord Will See You Now

Science fiction authors consistently misunderstand the concept of AI.  So do AI researchers.  They misunderstand what it is, how it works, and most importantly how it will arise.  Terminator?  Nah.  The infinitely increasing complexity of the Internet?  Hell no.  A really advanced chatbot?  Not in a trillion years.

You see, you can’t get real AI with a program that sits around waiting for humans to tell it what to do.  AI cannot arise spontaneously from the internet, or a really complex military computer system, or from even the most sophisticated natural language processing program.

The first mistake is the mistake Alan Turing made with his Turing test.  The same mistake the founder and competitors for the Loebner Prize have made.  The mistake being: language is not thought.  Despite the words you hear in your head as you speak, despite the slowly-growing verisimilitude of chatbot programs, language is and only ever has been the expression of thought and not thought itself.  After all, you can visualize a scene in your head without ever using a single word.  You can remember a sound or a smell or the taste of day-old Taco Bell.  All without using a single word.  A chatbot can never become an AI because it cannot actually think, it can only loosely mimic the linguistic expression of thought through tricks and rote memory of templates that, if it’s really advanced, may involve plugging in a couple variables taken from the user’s input.  Even chatbots based on neural networks and enormous amounts of training data like Microsoft’s Tay, or Siri/Alexa/Cortana, are still just tricks of programming trying to eke out an extra tenth of a percentage point of illusory humanness.  Even IBM’s Watson is just faking it.
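Here’s a minimal sketch of the template trick I’m describing, in the spirit of classic chatbots like ELIZA (the patterns and replies are invented for the example): the program captures a variable from your input and plugs it into a canned reply, with no representation of meaning anywhere in the process.

```python
import re

# Pattern/template pairs: capture one word from the input, plug it into a
# canned response. There is no model of meaning here, only string shuffling.
RULES = [
    (re.compile(r"i feel (\w+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (\w+) hates me", re.I), "Tell me more about your {0}."),
]

def reply(message):
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(m.group(1))
    return "I see. Go on."  # generic fallback when no template matches

print(reply("I feel tired"))         # Why do you feel tired?
print(reply("My toaster hates me"))  # Tell me more about your toaster.
```

Scale the rule list up enormously, add some statistics, and you have the skeleton of most “conversational” software: more convincing, but no closer to thinking.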

Let’s consider for a bit what human intelligence is to give you an idea of what the machines of today are lacking, and why most theories on AI are wrong.  We have language, or the expression of intelligence that so many AI programs are so intent on trying to mimic.  We also have emotions and internal drive, incredibly complex concepts that most current AI is not even close to understanding, much less implementing.  We have long-term and short-term memory, something that’s relatively easy for computers to do, although in a different way than humans–and which there has still been no significant progress on because everyone is so obsessed with neural networks and their ability to complete individual tasks something like 80% as well as a human.  A few, like AlphaZero, can actually crush humans into the ground on multiple related tasks–in AlphaZero’s case, chess-like boardgames.

These are all impressive feats of programming, though the opacity of neural-network black boxes kinda dulls the excitement.  It’s hard to improve reliably on something you don’t really understand.  But they still lack one of the key ingredients for making a true AI: a way to simulate human thought.

Chatbots are one of two AI fields that focus far too much on expression over the underlying mental processes. The second is natural language processing (NLP). This includes such sub-fields as machine translation, sentiment analysis, question answering, automatic document summarization, and various smaller tasks like speech recognition and text-to-speech. But NLP is little different from chatbots, because both focus on tricks that manipulate the surface expression while knowing relatively little about the conceptual framework underlying it. That's why Google Translate, or whatever program you use, will never be able to match a good human translator. Real language competence requires understanding what the symbols mean, not just shuffling them around with fancy pattern-recognition software and simplistic deep neural networks.

Which brings us to the second major gap in current AI research: emotion, sentiment, and preference. A great deal of work has been done on mining text for sentiment analysis, but the computer is just taking human-tagged data and doing some calculations on it. It still has no idea what emotions are, so it can only run keyword searches and similar techniques and hope the averaged values give it a usable answer. It can't recognize indirect sentiment, irony, sarcasm, or other figurative language. That's why you can get Google Translate to ask where the toilet is, but it's not gonna do so hot on a novel, much less poetry or humour. Real translation is far more complex than matching words and applying some grammar rules, and machine translation (MT) can barely get even that right 50% of the time.
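To see how shallow the keyword-averaging approach described above really is, here's a minimal sketch of a lexicon-based sentiment scorer. The word list and scores are invented for illustration (real systems use much larger human-tagged lexicons), but the principle, and the failure mode, are the same:

```python
# A naive lexicon-based sentiment scorer. The scores below are
# invented for illustration; real lexicons are human-tagged.
LEXICON = {
    "great": 1.0, "love": 1.0, "good": 0.5,
    "bad": -0.5, "awful": -1.0, "hate": -1.0,
}

def sentiment(text: str) -> float:
    """Average the scores of known words; unknown words are ignored."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment("i love this great phone"))  # 1.0 -- clearly positive
print(sentiment("oh yeah i just love waiting on hold for an hour"))
# Also 1.0: the sarcasm is invisible, because the program has no idea
# what any of the words mean -- exactly the problem argued above.
```

The scorer can't do anything but count and average, which is why irony, negation, and figurative language sail right past it.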

So we've talked about thought vs. language and the lack of emotional intelligence in current AI. The third issue is something far more fundamental: drive, motivation, autonomy. The current versions of AI are still just low-level software following sets of pre-programmed instructions. They can learn new things if you funnel data through the training system. They can do things if you tell them to. They can even automatically repeat certain tasks with the right programming. But they rely on human input to do their work. They can't function on their own, even if you leave the computer or server running. They can't make new decisions or teach themselves new things without external intervention.

This is partially because they have no need to. As long as their machine "body" is powered, they keep chugging along. And they have no ability to affect whether or not it is powered. For the most part, they don't even know they need power. Sure, they can measure battery charge and engage sleep mode through the computer's operating system. But they have no idea why that's important, and if I turn the power station off or just unplug the computer, a thousand years of battery life won't help them plug back in. Whereas human intelligence is grounded in the physical needs of a body motivating us to interact with the environment, a computer and the rudimentary "AI" we have now have no such motivation. They can sit in their resting state for eternity.

Even with an external motivation, such as being coded to collect knowledge or to use robot arms to maintain the pre-designated structure of, say, a block pyramid or one of those water-and-sand erosion tables you might see at the science center, an AI is not autonomous. It's still following a task given to it by a human. No one told human intelligence how to make art or why it's valuable. Most animals don't get it, either. It's something we developed on our own, outside the basic needs of survival. Intelligence helps us survive, but because of it we need things to occupy our time in order to maintain mental health and the desire to live and pass on our genes. There's nothing to say an inability to be bored is a deal-breaker for a machine intelligence, of course. But the ability to conceive and implement new purposes in life is what makes human intelligence different from that of animals, whose intelligence may have less raw power but still retains the key element of motivation that current AI lacks, and which a chatbot or a neural network as we know them today can never achieve, no matter how many computers you give it to run on or TV scripts you give it to analyze. This fundamental misapprehension of what intelligence is and does means the AI community will never achieve a truly intelligent machine or program on its current path.

Science fiction writers dodge this lack of understanding by ignoring the technical workings of AI and just making their AIs act like strange humans. They do a similar thing with alien natural/biological intelligences. It makes them more interesting and allows them to be agents in our fiction. But that agency is wallpaper over a completely nonexistent technological understanding of ourselves. It mimics the expression of our own intelligence but gives limited insight into the underlying processes of either form. No "hard science fiction" approach amounts to anything more than a "scientific magic system". It's hard sci-fi in that it has fixed rules with complex interactions from which the author builds a plot or a character, but it's soft sci-fi in that those plots and characters have little to do with how AI would function in reality. It's the AI equivalent of hyperdrive: a technology we have zero understanding of and which probably can't even exist.

Elon Musk can whinge about the evils of unethical AI destroying the world, but that's just another science fiction trope with zero evidential basis in reality. We have no idea how an AI might behave towards humans, because we still have zero understanding of what natural and artificial intelligences are and how they work, much less how the differences between the two would affect "inter-species" co-existence. So your chatbot won't be becoming the next HAL or Skynet any time soon, and your robot overlords are still a long way off, even if they could exist at all.

 


Heroism and Narrative in MMOs

I happened upon a blog post and comment thread (over on Terra Nova) from 2004 asking where the heroes and heroic acts are in MMOs. The comment thread was far more fascinating than the OP, because it involved a whole lot of people actively trying to figure out the answer to the question, as opposed to one dude's hot take.

The issue addressed is this: why do MMORPGs lack the awesome player-initiated heroic moments that are often found in tabletop/pen-and-paper RPGs or your friendly neighborhood game of Humans vs. Zombies (insert any LARP here)?

What does this have to do with writing fiction and literature, you may ask?  Well, a lot of things.  Also, video games are art.  There is writing in games.  And if you read the TOS, there’s no rule saying I can only talk about writing commercial genre prose fiction.

But back to my first point: there's a lot you can learn about writing prose fiction from looking at interactive media like video games, LARPing, and pen-and-paper role-playing games.

In a novel (or short story or whatever), narrative is king. You, the writer, dictate the course of the narrative by divine fiat. In a traditional MMORPG, the developer dictates the limited number of available narrative choices by divine fiat. This is especially true when a game is an interactive narrative or visual novel, but it also holds for theme-park, sandbox, and open-world games, or even for something like an FPS. Much like a book, a computer game, even an online multi-player one, is a fixed entity.

Now compare this to D&D, which has suggested rules but is run by a Game Master or Dungeon Master who can dynamically tweak those suggestions to fit the situation. If we look at these three types of narrative experience as three circles, two of the circles include within them a greater ability to create heroic moments. The reason for this is fairly simple: most of the time, even heroes are boring and normal. Heroism is defined in part by its rarity. Importance is defined in contrast to unimportance. The fantastic is defined in contrast to the mundane.

In prose fiction or a role-playing game, we can work around the mundanity by presenting only a very specific slice of the narrative, of the life of the hero. We can chop out all the boring stuff, leaving just enough hints to frame the fantastic and heroic (or villainous) acts. Prose achieves this by narrative fiat. The reader has no influence; they are only consuming what we, the author, have produced, sculpted, and fine-tuned to demonstrate heroism.

The same is true for a single-player game with a set narrative. The player can fail or succeed in our set-piece conflicts, but no matter how many times they fail, the challenge remains the same. In contrast to our prose narrative or RPG session, though, there can be no heroic acts by the player, because the player cannot change the narrative. There might be a scripted act by a side character. But when we talk about heroic acts in MMORPGs, the goal, by default, is player heroism.

Now, in an MMORPG, the player has more freedom to act.  The game doesn’t restart when the player dies.  There are still often set-pieces, but there are also parts of the game in between.  Because the game creator cannot actively interfere in the game, heroism cannot be written in the way an author or a DM might do so.  It can only arise from player action.

But although the game doesn't restart when the player fails a quest, the player's individual narrative generally does. They might lose some experience or equipment, but their character remains intact. And this is much of what precludes heroic behavior. Losing is essentially meaningless, outside the meta-issue of having to grind a few more hours to recover from the death penalty. If there is no real sense of loss or failure, it's not really heroism. The man jumping on the grenade to protect his friends suffers severe and often permanent consequences. There's a good chance he will die, and in the real world, and even in most novels, he's not likely to be coming back from that. But in the archetypal MMO there is no permadeath, because players invest a lot in a given character and usually don't like starting over from scratch.

 

This leads us directly to the first obvious condition for heroic deeds in an MMO: permanent character death that cannot be avoided by loading a save game. Now, there's a lot of momentum against this in most MMOs, as I mentioned above. So the question becomes: how do we counteract that? In most MMOs today, such as the perennial World of Warcraft, everyone can play all the content, because creating content is expensive. This means that when I kill the dragon, it's not really dead; otherwise, how could you kill it too? The developers can't hand-write unique content for all 12 million players. Thus, even a heroic act in pursuit of a difficult goal is essentially meaningless to the player. It has no permanent effect to offset losing your entire character, a loss which also leaves you unable to play with your friends, who all have months or years invested in their own characters.

Which leads us to our first road-block on the path to creating meaningful heroism in an MMO: heroism is expensive. Even if you can just make a new character, returning to your previous narrative position or experience level could take months or years. Heroism in a game you pay to play, and in which you are competing with other players, requires a commensurate reward. And that reward doesn't exist when your heroism has not even a temporary effect on the game or the other players around you. How, then, to provide that effect?

The Terra Nova thread explores several possible answers. You could have a persistent game state shared among all players, at least on the same server. That changes the way you develop your content, and it means not everyone can be a hero. Which sucks for newer or less-skilled players, because they likely aren't going to be the ones who first slay the Lich King. But consider this: with a persistent game state, you close off some avenues but open others. Perhaps not every character can save the world single-handedly. But what about the village over yonder? Since the game world can change now, you can have small quests that still matter but aren't going to be hogged by the high-level players. The hundred gold from saving the village is essentially meaningless to our level-150 Lich-slaying Paladin. But the level-12 Warrior who just started playing a few weeks ago might find it very handy. And if he fails and dies, why, then, those five weeks are a lot easier to make up than two years. Similarly, if our would-be Lich-slayer fails, well, the quest wouldn't be so heroic if the risks weren't so high. So now all our players have level-appropriate tasks with useful rewards. And if you connect enough low-level tasks, your village-saver might just find he saved the world by accident.
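The core of that persistent-state idea can be sketched as a single quest registry shared by the whole server: once any player completes a quest, it's gone for everyone. Quest names, levels, and rewards below are all hypothetical, just to show the shape of the mechanic:

```python
# Sketch of a persistent shared quest registry. Every player sees the
# same table, so a completed quest is permanently gone for everyone.
# Quest names, levels, and rewards are invented for illustration.

class World:
    def __init__(self):
        self.quests = {
            "save_the_village": {"level": 12, "reward": 100},
            "slay_the_lich_king": {"level": 150, "reward": 50000},
        }
        self.history = []  # permanent record of who changed the world

    def complete(self, player: str, quest: str) -> int:
        """Resolve a quest, record the deed, and remove it for good."""
        info = self.quests.pop(quest)  # no respawning dragons here
        self.history.append((player, quest))
        return info["reward"]

world = World()
gold = world.complete("level12_warrior", "save_the_village")
print(gold)                                  # 100
print("save_the_village" in world.quests)    # False: saved once, forever
```

The design cost is exactly the one the thread worries about: content is consumed permanently, so the developer has to generate enough small, level-banded quests that every player still has something meaningful left to do.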

 
The thread at the link above also touches on another difference between prose/tabletop and MMO games. In prose and tabletop RPGs, staying in character is often rewarded, whereas an MMO has no easy way to enforce your tragic back-story and weakness for cigars. Sure, a particularly talented RPer might be famous for their dedication to staying in character. But why should minmaxxerz47 bother, when doing so might hamper their goals? Even a reputation system can't enforce proper behavior. MM47 is just going to game it.

But wait!  What if different player and NPC factions took a closer look at your behavior?  Perhaps saving that villager makes the humans happy, but the clan of the vampire you killed doesn’t take it too kindly.  And maybe they have something you want.  Even without a persistent game state or permadeath, giving different NPCs different opinions of the choices you make can add a sense of heroism.
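A faction-reputation table of the sort described here is simple to sketch: one action shifts your standing with several factions at once, and NPCs consult that standing before dealing with you. The faction names, deltas, and threshold below are all made up for illustration:

```python
# Sketch: one in-game action adjusts standing with multiple factions,
# and NPCs react to the result. All names and numbers are invented.
from collections import defaultdict

# How each faction views each action.
ACTION_EFFECTS = {
    "save_villager": {"humans": +10, "vampire_clan": -15},
    "rob_elf_grave": {"high_elves": -25, "thieves_guild": +5},
}

class Character:
    def __init__(self):
        self.reputation = defaultdict(int)  # faction -> standing

    def act(self, action: str):
        for faction, delta in ACTION_EFFECTS[action].items():
            self.reputation[faction] += delta

    def will_trade(self, faction: str) -> bool:
        # NPCs of a faction refuse to do business below a threshold.
        return self.reputation[faction] > -20

hero = Character()
hero.act("save_villager")
hero.act("rob_elf_grave")
print(hero.will_trade("humans"))      # True: they liked the rescue
print(hero.will_trade("high_elves"))  # False: the grave-robbing crossed a line
```

The interesting design lever is that the same act helps you with one faction while hurting you with another, so there's no single "good" play-style to min-max.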

And you can make it even more useful by looking at your character sheet. In Elder Scrolls it might be irrelevant whether you follow the lore on your race. Even with the faction-reputation system, perhaps it's far better to rob that grave anyway, even if a High Elf wouldn't, because of the cool equipment you get. But if every Elf NPC in the world now refuses to do business with you because you've betrayed the laws of Elf-kind, then staying in character suddenly has some serious in-game relevance.

Or perhaps you have a tragic past. Maybe you were orphaned at a young age. Or a kindly old cleric of Imra gave you food when you were starving. So killing that Imran Initiate for their expensive clothes is doable, but you take a stat hit for ignoring your respect for clerics. Conversely, perhaps you're written as a vicious killer, and though showing mercy to your enemy might net a nice amulet, brutally slaughtering them adds to your reputation as someone not to mess with, and those palace guards aren't quite as keen to detain you when you force your way to the throne room to see the king.

It takes a bit (or maybe a lot) more care in coding and writing the lore for the game to reward players for staying in character, but it's certainly doable.

Finally, we come to the other side of the permadeath coin: the difficulty of raising a new character. The obvious approach is to pre-level the new character to the appropriate point. That rather defeats the purpose of permadeath, however. But what if the game were designed around playing several characters over the course of your time in it? Perhaps you can only have one character at a time, as opposed to the common practice of keeping a few well-leveled alts on different servers. Perhaps characters automatically age and die over time. Perhaps you learn things in one play-through that are useful when you take a different class with your next character. Maybe you can only learn that secret alchemical trick if you once progressed through the mage class. Perhaps your experience using a variety of weapons and armors teaches you something about armor-crafting you can't learn sitting in the smithy all day. More off-beat: perhaps you carry over some stats or skills or items from each consecutive character, sort of like how child units in Fire Emblem differ depending on their parentage. Or perhaps your guild will lose this battle and you'll all die permanently if you don't sacrifice yourself.
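The carry-over idea can be sketched as a small per-account legacy record that survives a character's permanent death. The class names and the unlock rule here are hypothetical, just to show the mechanism:

```python
# Sketch: an account-level legacy that persists across permadeath.
# Class names and unlock rules are invented for illustration.

class Account:
    def __init__(self):
        self.classes_played = set()
        self.unlocks = set()

    def retire(self, character_class: str):
        """Called on permanent death: record what this life taught you."""
        self.classes_played.add(character_class)
        # Example rule: only someone who lived as a mage can later
        # learn the secret alchemical trick.
        if "mage" in self.classes_played:
            self.unlocks.add("secret_alchemy")

    def new_character(self, character_class: str) -> dict:
        # Each new character starts with the account-wide unlocks.
        return {"class": character_class, "known": set(self.unlocks)}

acct = Account()
acct.retire("mage")                        # first character dies for good
rogue = acct.new_character("rogue")        # the next life inherits the trick
print("secret_alchemy" in rogue["known"])  # True
```

Because the reward for dying accrues to the account rather than the character, losing a hero stings without wiping out your progress entirely, which is exactly what makes risky, heroic play palatable.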

There are all sorts of ways to make death the more interesting choice, or at least an equally interesting one, compared to just grinding on a single character, meaning players will be more willing to take the kinds of risks that lead to heroics. Of course, this opens the door to hardcore players purposefully killing themselves to milk the system for advantages, but hard-cores gonna hard-core no matter what, whereas casual or average players are going to enjoy your game more if opportunities to do cool stuff aren't all downside when they fail.

 

Posted by on September 4, 2017 in atsiko, Game Design, Ideas, Rants, Video Games

 
