
Category Archives: technology

YA and SFF: The Good Twin and the Bad Twin

So as I was scrolling through my Twitter feed today, I ran across a link to this article by Fonda Lee: The Case for YA Science Fiction.  Read the post before you continue.  I’ll wait…

Okay.  So, the gist of the post is that YA Fantasy novels have been selling like crazy.  The genre has several big-name authors, including those mentioned in Lee’s post, and many others besides.  I can tell you right now I’ve read most of the books put out by all of those authors in the YA Fantasy genre.  And so have millions of others.  They may not be as popular as dystopians, and they certainly don’t get as many movie deals.  But they move a lot of dead trees and digital trees.  I’ve been blogging and writing long enough to remember four or five rounds of “Will Science Fiction be the next big thing in YA?”  And the answer was always no.  There would be upticks and uptrends.  Several fantastic books would come out in a short period.  But nothing would ever really break into the big money or sales the way YA Fantasy often does.  It wouldn’t be blasted all over the blogosphere, or the writers’ forums, or the tip-top of the bestseller lists.  Which is too bad, because science fiction has a lot of value to add to YA as a category, and it can address issues, and do so in ways, not available to other genres.

Lee mentions several notable YA SF novels that take on current events and other contemporary issues that are ripe for exploration: MT Anderson’s Feed is a fantastic look at the way social media has been taken over by advertisers looking to build monetizable consumer profiles, and the ending, without spoilers, takes a look at just how far they go in valuing those profiles over the actual humans behind them.  She mentions House of the Scorpion, which I didn’t care for, but which is still a very good novel on the subject of cloning.  Scott Westerfeld never gets credit for his amazing additions to the YA SF canon, with the steampunk Leviathan series and the dystopian Uglies series.

YA SF has a lot of unmined treasure to be found, and maybe it will have to focus a bit on near-future SF for a while, to whet the appetite of YA readers.  Some of the hard SF tropes Lee discusses in her post kinda bore me, honestly.  And as a writer, I feel like saying “it’s magic” is popular because it’s simpler.  There’s always a huge debate in adult SFF about whether the worldbuilding or science details really add enough to the story compared to the narrative effects of the speculative elements.  The social issues we are having as a world today are incredibly accessible fruit for a YA SF novel to harvest: social media, AI/big data, consumer profiles, technology in education.

I mean, I know 8-year-olds whose schools give out tablets to every student to take advantage of what tech in the classroom can offer.  My high school was getting SmartBoards in every classroom just a year after I left in the late 2000s.  But you never see any of this in YA books.  They often feel set no later than my sophomore year of high school given the technology and social issues involved.  Being a teenager will always be being a teenager, but the 80s and early 90s are waaaaaaaaaaaaayyy different than what young adults encounter in their general environment today.  Of course, to be SF you can’t just upgrade the setting to the present day.

You have to extrapolate out quite a bit further than that.  But given the environment today’s teens are living in, doing so while keeping the story interesting and relatable is so easy.  What’s the next big advance in social media?  How will smart houses and the internet of things impact the lives of young adults, for better or worse?  How will the focus of education change as more and more things that you used to have to do in your head or learn by rote are made trivial by computers?  What social or political trends are emerging that might have big consequences in the lives of future teenagers?  How could an author explore those more intensely with the elements of science fiction than they could with a contemporary novel?

I definitely share Lee’s sense that YA “science fiction” grabs the trappings of the genre to stand out from the crowd rather than being rooted in its tropes.  It’s not uncommon for YA in general to play this game with various genre outfits, but sci-fi often seems the hardest hit.  That’s not a criticism of those books, but it might give readers, writers, and publishers a false image of what SF really is and how YA could benefit from incorporating more of it.

As a reader, I’ve always dabbled in both the YA and Adult bookcases.  And from that perspective, I wonder if the flavor of much of YA SF might be telling SF readers, teenaged or otherwise, that these just aren’t the books for them.

As a writer, I have lots of novel ideas that are both YA and SF, and I’d like to explore them, and maybe even publish some of them one day.  But I do have to wonder, given the wide variety of stories building in my head: am I taking a risk with my career by writing in such a threadbare genre?  Perhaps others with similar ideas feel the same, and that’s why they aren’t submitting these books to publishers.


Smol Bots: ANNs and Advertising

So I recently read a great story by A. Merc Rustad, “it me, ur smol”.  The story is about an ANN, or artificial neural network.  You may or may not know that the neural net is the latest fad in AI research, replacing statistical models with a model based on (but not the same as!) your brain.  Google uses them for its machine translation, and many other machine translation companies have followed suit.  My last post also dealt with an ANN, in that case one trained to recognize images.
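
To make “neural network” a little less abstract, here’s a minimal sketch in Python (assuming only numpy): a toy two-layer net learning XOR by backpropagation.  It’s nowhere near the scale of a translation model, but the core mechanism, layers of weights nudged by error feedback, is the same.

    # A tiny two-layer neural network trained by backpropagation to learn XOR.
    # Illustrative only; real systems stack far more layers and data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8))  # input -> hidden weights
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1))  # hidden -> output weights
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)    # hidden activations
        out = sigmoid(h @ W2 + b2)  # the network's guess
        # Push the error backwards and nudge every weight a little.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]]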

ANN accounts, like @smolsips in the story above, have become very popular on Twitter lately.  A favorite of mine is the @roborosewater account, which shares card designs for Magic: The Gathering created by a series of neural nets.  It’s lately become quite good at both proper card syntax and design, although it’s not significantly better at this than any other Twitter neural net is at other things.

The story itself takes some liberties with neural nets.  They are certainly not capable of developing into full AIs.  However, the real genius of the story is in its pitch-perfect depiction of the way human Twitter users and bots interact, and of the likely development of bots in the near future.  It’s quite likely that bot accounts will become a more significant and less dreaded feature of Twitter and similar social networks as they improve in capability.

For example, rather than sock-puppet accounts, I’m very confident that bot accounts used for advertising or brand visibility, similar to the various edgy customer service accounts, will be arriving shortly.  They’ll use humour and other linguistic tools to make themselves more palatable as ads, and to encourage a wider range of engagement as their tweets are shared for reasons having little to do with whatever product they may be shilling.

There are already chatbots on many social media platforms that engage in phone-tree-style customer service and attempt to help automate registrations for services.  The idea of a bot monitoring its own performance by checking its Twitter stats and then trying new methods, as in the story, is well within the capabilities of current neural nets, although I imagine they would be a tad less eloquent than @smolsips, and a tad more spammy.
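
As a sketch of that self-monitoring loop, here’s a toy version in Python.  Everything here is invented; get_tweet_stats is a hypothetical stand-in for whatever analytics endpoint a real bot would poll.  But the “try things, keep what works” epsilon-greedy logic is squarely within current capabilities.

    # Toy bot that tracks its own engagement and shifts strategy accordingly.
    # get_tweet_stats is a hypothetical stand-in for a real analytics API.
    import random

    styles = ["pun", "wholesome", "hydration-fact"]
    stats = {s: {"tweets": 0, "likes": 0} for s in styles}

    def get_tweet_stats(style):
        # Hypothetical: pretend the network likes puns a bit more.
        base = {"pun": 5, "wholesome": 3, "hydration-fact": 2}[style]
        return random.randint(0, base * 2)

    def pick_style(epsilon=0.2):
        # Mostly exploit the best-performing style, sometimes explore.
        if random.random() < epsilon or all(v["tweets"] == 0 for v in stats.values()):
            return random.choice(styles)
        return max(styles, key=lambda s: stats[s]["likes"] / max(stats[s]["tweets"], 1))

    for _ in range(200):
        style = pick_style()
        stats[style]["tweets"] += 1
        stats[style]["likes"] += get_tweet_stats(style)

    for s in styles:
        print(s, round(stats[s]["likes"] / max(stats[s]["tweets"], 1), 2))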

I also really like the idea of a bot working to encourage good hydration.  Things like Fitbit or Siri or Google Home have already experimented shallowly with using AI to help humans stay healthy.  And as an organizing tool, Twitter itself has been used to great effect.  I would be quite un-shocked to find NGOs, charities, and government agencies making use of clever or cute bots to pursue other public policy goals.  Again, with less panache and more realism than in the story, but nonetheless strongly in the vein of what Rustad depicts our erstwhile energy drink namer trying out in its optimistic quest to save us from our own carelessness.

We’ve had apps along these lines before, but they tend to be reactive.  Active campaigning and organizing in the style of @smolsips is something we haven’t seen very often, but it could be quite a boon to such efforts.

Although neural nets in this style will never be able to pass for real humans, due to structural limitations in their design, cleverly programmed ones can be both useful and entertaining.

Some other examples of bots I quite enjoy are:

  1. Dear Assistant uses the Wolfram Alpha database to answer factual questions.
  2. Grammar Police is young me in bot form.  It must have a busy life trying to standardize Twitter English.  XD
  3. Deleted Wiki Titles lets you know what shenanigans are happening over on the high school student’s favorite source of citations.
  4. This bot that tweets procedurally generated maps.
  5. This collaborative horror writer bot.
  6. This speculative entomology bot.
  7. The Poet .Exe writes soothing micro-poetry.

Suggest some of your favorite Twitter bots in the comments!

 


Do Androids Dream?

I’m here with some fascinating news, guys.  Philip K. Dick may have been joking with the title of his famous novel Do Androids Dream of Electric Sheep?  But science has recently answered this deep philosophical question for us.  In the affirmative.  The fabulous Janelle Shane trains neural networks on image recognition datasets with the goal of uncovering some incidental humour.  She’s taken this opportunity to answer a long-standing question in AI.  As it turns out, artificial neural networks do indeed dream of digital sheep.  Whether androids will too is a bit harder to say.  I’d hope we would improve our AI software a bit more before we start trying to create artificial humans.

As Shane explains in the above blog post, the neural network was trained on thousands or even millions (or more) of images, which were pre-tagged by humans for important features.  In this case, lush green fields and rocky mountains.  Also, sheep and goats.  After training, she tested it on images with and without sheep, and it turns out it’s surprisingly easy to confuse.  It assumed sheep where there were none and missed sheep (and goats) staring it right in the face.  In the second case, it identified them as various other animals based on the other tags attached to images of them: dogs in your arms, birds in a tree, cats in the kitchen.

This is where Shane and I come to a disagreement.  She suggests that the confusion is the result of insufficient context clues in the images.  That is, fur-like texture plus a tree makes a bird; with a leash, it makes a dog; in a field, a sheep.  The nets see a field and expect sheep.  If there’s an over-abundance of sheep in the fields of the training data, they start to expect sheep in all the fields.
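
Here’s a toy sketch of that over-expectation in Python, with an invented dataset of whole-image tags.  A “model” that just counts co-occurrences, which is roughly what the statistics boil down to, happily learns “field” as evidence of “sheep”.

    # Toy spurious correlation: whole-image tags teach the model that
    # fields imply sheep.  The dataset is invented for illustration.
    from collections import Counter

    # (scene context, animal actually present) pairs, as tagged by humans.
    training = (
        [("field", "sheep")] * 90 +    # most field photos contain sheep
        [("field", "dog")] * 10 +
        [("tree", "bird")] * 50 +
        [("kitchen", "cat")] * 50
    )

    # Count how often each animal co-occurs with each context tag.
    counts = {}
    for context, animal in training:
        counts.setdefault(context, Counter())[animal] += 1

    def predict_animal(context):
        # The "model": name the animal most often seen with this context.
        return counts[context].most_common(1)[0][0]

    print(predict_animal("field"))    # sheep, even for an empty field
    print(predict_animal("kitchen"))  # cat, even if it's a goat in the kitchen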

But I wonder: what about the issue of paucity of tags?  Because of the way images are tagged, there’s not a lot of hint about what the tags are referring to.  Unlike more standard teaching examples, these images are very complex, and there are lots of things in them besides what the tags note.  I think the flaw is a lot deeper than Shane posits.  The AI doesn’t know how to recognize discrete objects like a human can.  Once you teach a human what a sheep is, they can recognize it in pretty much any context.  Even a weird one, like a spaceship or a fridge magnet.  But a neural net isn’t sophisticated enough, or, most generously, structured properly, to understand what the word “sheep” is actually referring to.  It’s quite possible the method of tagging is directly interfering with the ANN’s ability to understand what it’s intended to do.

The images are going to contain so much information, so many possible objects that each tag could refer to, that the net might be matching “sheep” to something entirely different from what a human would match it to.  “Fields” or “lush green” are easy.  If there are a lot of green pixels, those tags are pretty likely, and because they take up a large portion of the information in the image, there’s less chance of false positives.

Because the network doesn’t actually form a concept of sheep, or determine what entire section of pixels makes up a sheep, it’s easily fooled.  It only has some measure by which it guesses at their presence or absence, probably a sort of texture as mentioned in Shane’s post.  So the pixels making up the wool might be the key to predicting a sheep, for example.  Of course, NNs can recognize lots of image data, such as lines, edges, curves, fills, etc.  But it’s not the same kind of recognition as a human, and it leaves AIs vulnerable to pranks, such as the sheep in funny places test.

I admit to over-simplifying my explanations of the technical aspects a bit.  I could go into a lecture about how NNs work in general and for image recognition, but it would be a bit long for this post, and in many cases no one, not even a system’s designers, really knows everything about how it makes its decisions.  It is possible to design or train them more transparently, but most people don’t.

But even poor design has its benefits, such as answering this long-standing question for us!

If anyone feels I’ve made any technical or logical errors in my analysis, I’d love to hear about it, insomuch as learning new things is always nice.

 


Your Chatbot Overlord Will See You Now

Science fiction authors consistently misunderstand the concept of AI.  So do AI researchers.  They misunderstand what it is, how it works, and most importantly how it will arise.  Terminator?  Nah.  The infinitely increasing complexity of the Internet?  Hell no.  A really advanced chatbot?  Not in a trillion years.

You see, you can’t get real AI with a program that sits around waiting for humans to tell it what to do.  AI cannot arise spontaneously from the internet, or from a really complex military computer system, or from even the most sophisticated natural language processing program.

The first mistake is the mistake Alan Turing made with his Turing test.  The same mistake the founder of, and competitors for, the Loebner Prize have made.  The mistake being: language is not thought.  Despite the words you hear in your head as you speak, despite the slowly-growing verisimilitude of chatbot programs, language is and only ever has been the expression of thought, not thought itself.  After all, you can visualize a scene in your head without ever using a single word.  You can remember a sound or a smell or the taste of day-old Taco Bell.  All without using a single word.  A chatbot can never become an AI because it cannot actually think; it can only loosely mimic the linguistic expression of thought through tricks and rote memory of templates, which, if it’s really advanced, may involve plugging in a couple of variables taken from the user’s input.  Even chatbots based on neural networks and enormous amounts of training data, like Microsoft’s Tay or Siri/Alexa/Cortana, are still just tricks of programming trying to eke out an extra tenth of a percentage point of illusory humanness.  Even IBM’s Watson is just faking it.
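
To see how thin those template tricks are, here’s a minimal ELIZA-style sketch in Python.  The patterns are invented, but this is the whole “plug a variable into a canned response” move in a dozen lines.

    # Minimal template chatbot: pattern matching plus variable substitution.
    # No thought involved, just canned responses with slots.
    import re

    templates = [
        (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bi want (.+)", re.I), "What would {0} give you?"),
        (re.compile(r"\bmy (\w+)\b", re.I), "Tell me more about your {0}."),
    ]

    def reply(utterance):
        for pattern, response in templates:
            match = pattern.search(utterance)
            if match:
                # "Plugging in a couple variables taken from the user's input."
                return response.format(*match.groups())
        return "I see.  Go on."

    print(reply("I feel like nobody listens"))  # Why do you feel like nobody listens?
    print(reply("My dog ran off"))              # Tell me more about your dog.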

Let’s consider for a bit what human intelligence is, to give you an idea of what the machines of today are lacking and why most theories on AI are wrong.  We have language, the expression of intelligence that so many AI programs are so intent on trying to mimic.  We also have emotions and internal drive, incredibly complex concepts that most current AI is not even close to understanding, much less implementing.  We have long-term and short-term memory, something that’s relatively easy for computers to do, although in a different way than humans, and on which there has still been no significant progress because everyone is so obsessed with neural networks and their ability to complete individual tasks something like 80% as well as a human.  A few, like AlphaZero, can actually crush humans into the ground on multiple related tasks, in that case chess-like board games.

These are all impressive feats of programming, though the opacity of neural-network black boxes kinda dulls the excitement.  It’s hard to improve reliably on something you don’t really understand.  But they still lack one of the key ingredients for making a true AI: a way to simulate human thought.

Chatbots are one of two AI fields that focus far too much on expression over the underlying mental processes.  The second is natural language processing (NLP).  This includes such sub-fields as machine translation, sentiment analysis, question-answering, automatic document summarization, and various minor tasks like speech recognition and text-to-speech.  But NLP is little different from chatbots, because they both focus on tricks that manipulate the surface expression while knowing relatively little about the conceptual framework underlying it.  That’s why Google Translate, or whatever program you use, will never be able to match a good human translator.  Real language competence requires understanding what the symbols mean, not just shuffling them around with fancy pattern-recognition software and simplistic deep neural networks.

Which brings us to the second major lack in current AI research: emotion, sentiment, and preference.  A great deal of work has been done on mining text for sentiment analysis, but the computer is just taking human-tagged data and doing some calculations on it.  It still has no idea what emotions are, so it can only do keyword searches and the like and hope the average values give it a usable answer.  It can’t recognize indirect sentiment, irony, sarcasm, or other figurative language.  That’s why you can get Google Translate to ask where the toilet is, but it’s not gonna do so hot on a novel, much less poetry or humour.  Real translation is far more complex than matching words and applying some grammar rules, and Machine Translation (MT) can barely get that right 50% of the time.
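
A minimal sketch of that keyword approach, with invented word lists, shows the failure mode directly: average the keywords and sarcasm scores as praise.

    # Naive keyword sentiment: count positive and negative words and sum.
    # Word lists are invented; real lexicons are bigger, but the blindness
    # to sarcasm and irony is the same.
    POSITIVE = {"great", "love", "wonderful", "fantastic"}
    NEGATIVE = {"terrible", "hate", "awful", "broke"}

    def sentiment(text):
        words = text.lower().replace(",", " ").replace(".", " ").split()
        return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

    print(sentiment("I love this, it works great"))               # +2, plausible
    print(sentiment("Oh great, it broke again. Just fantastic.")) # +1, pure sarcasm read as praise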

So we’ve talked about thought vs. language, and the lack of emotional intelligence in current AI.  The third issue is something far more fundamental: drive, motivation, autonomy.  The current versions of AI are still just low-level software following a set of pre-programmed instructions.  They can learn new things if you funnel data through the training system.  They can do things if you tell them to.  They can even automatically repeat certain tasks with the right programming.  But they rely on human input to do their work.  They can’t function on their own, even if you leave the computer or server running.  They can’t make new decisions, or teach themselves new things, without external intervention.

This is partially because they have no need to.  As long as their machine “body” is powered, they keep chugging along.  And they have no ability to affect whether or not it is powered.  They don’t even know they need power, for the most part.  Sure, they can measure battery charge and engage sleep mode through the computer’s operating system.  But they have no idea why that’s important, and if I turn the power station off or just unplug the computer, a thousand years of battery life won’t help them plug back in.  Whereas human intelligence is based on the physical needs of the body motivating us to interact with the environment, a computer and the rudimentary “AI” we have now have no such motivation.  They can sit in their resting state for eternity.

Even with an external motivation, such as being coded to collect knowledge or to use robot arms to maintain some pre-designated structure, say a block pyramid, or the water-and-sand table you might see demonstrating erosion at the science center, an AI is not autonomous.  It’s still following a task given to it by a human.  Whereas no one told human intelligence how to make art or why it’s valuable.  Most animals don’t get it, either.  It’s something we developed on our own, outside the basic needs of survival.  Intelligence helps us survive, but because of it we need things to occupy our time in order to maintain mental health and a desire to live and pass on our genes.  There’s nothing to say that an inability to be bored is a deal-breaker for a machine intelligence, of course.  But the ability to conceive of and implement new purposes in life is what makes human intelligence different from that of animals, whose intelligence may have less raw power but still maintains the key element of motivation that current AI lacks, and which a chatbot or a neural network as we know them today can never achieve, no matter how many computers you give it to run on or TV scripts you give it to analyze.  The fundamental misapprehension of what intelligence is and does by the AI community means they will never achieve a truly intelligent machine or program.

Science fiction writers dodge this lack of understanding by ignoring the technical workings of AI and just making them act like strange humans.  They do a similar thing with alien natural/biological intelligences.  It makes them more interesting and allows them to be agents in our fiction.  But that agency is wallpaper over a completely nonexistent technological understanding of ourselves.  It mimics the expression of our own intelligence, but gives limited insight into the underlying processes of either form.  No “hard science fiction” approach amounts to anything more than a “scientific magic system”.  It’s hard sci-fi because it has fixed rules with complex interactions from which the author builds a plot or a character, but it’s “soft sci-fi” in that these plots and characters have little to do with how AI would function in reality.  It’s the AI equivalent of hyperdrive.  A technology we have zero understanding of and which probably can’t even exist.

Elon Musk can whinge over the evils of unethical AI destroying the world, but that’s just another science fiction trope with zero evidential basis in reality.  We have no idea how an AI might behave towards humans, because we still have zero understanding of what natural and artificial intelligences are and how they work.  Much less how the differences between the two would affect “inter-species” co-existence.  So your chatbot won’t be becoming the next HAL or Skynet any time soon, and your robot overlords are still a long way off, even if they could exist at all.

 


Creating Unique Fantasy Worlds: Background

In my last post, as sort of a prelude to the complex topic I’d like to discuss here, I talked about ways to create fantasy cultures based on real cultures and the advantages and disadvantages of this method.  I’m going to start out this post by talking about such counterpart cultures again, but this time, I’m going to focus on the difficulties of creating a truly original culture and how the common use of counterpart cultures undermines such attempts.

 

So, counterpart and generalized Earth cultures make up a great deal of the fantasy landscape.  They exert an enormous influence, both on the types of stories that are common and on reader expectations.  I’m going to talk about reader expectations first.

Readers expect certain things when they pick up a book.  These are based on the cover, the blurb, the author.  But also on their past experiences with the genre.  If they’re used to parsing and relating to stories and characters in a pseudo-medieval European setting, they’re going to have difficulty relating to a character in a different setting, because setting informs character.  Also, writers and readers in the genre have developed a set of short-cuts for conveying various forms of information from the writer to the reader.  A reader is familiar with the tropes and conventions of the genre, and writers can and almost inevitably do manipulate this familiarity in order to both meet reader expectations and violate them without going into a wall of text explaining the violation.

Both the writer and the reader of high fantasy have an understanding of the concept of the knight.  Or at least the version in Europa, our faux medieval European setting in which so many fantasies take place.  So when a writer introduces a character as a knight, it’s shorthand for a great deal of information which the writer now does not have to explain with long info-dumps about the history of European chivalry and feudalism.  There’s a strong tension in fantasy between world-building and not info-dumping, because for the most part, info-dumps get in the way of the story.  You don’t want to drop craploads of information on the reader all at once, because it interrupts the story.  But you need them to understand the background in order to put the story in context.  Why would a fighter give his opponent a chance to ready himself and get on an equal footing when the stakes of the battle are the conquest of the kingdom?  Because his culture holds honour as one of the highest moral values.  Would sneaking up behind him and stabbing him in the back be easier, have a higher chance of success, and not put the kingdom at risk?  Sure.  So would shooting him with an arrow from behind a tree.  Or two hundred arrows in an ambush as he walks through the forest.  But it would be dishonorable.  And then he might do the same to you.  The same reason why parley flags are honored when it might be so much simpler for one side or the other to just murder the guy.

People do all sorts of dumb shit because it’s “the right thing to do” or perhaps because due to complex cultural values or humans being shitheads, the short-term loss helps uphold a long-term gain.  The tension between the obvious solution in the moment and why it might be foolish in the larger context is a powerful way to drive conflict in the story.  But teaching the reader larger context is a heavy burden when they don’t have any real previous understanding of it.  By using Europa as our setting, we get all that context for free because the reader has previous experience.

The same goes for any sort of counterpart culture.  Rome or Japan have a large collection of tropes in, say, Western English-speaking society.  Readers will be familiar with those tropes.  So if you want a bit of a break from knights and princesses, why, you can take a quick detour through samurai and ninjas.  Or legionnaires and barbarians.  Sometimes these are just trappings on top of the same style of story.  Sometimes these new settings and tropes introduce new things to the story that are really cool.  But because audiences have less exposure to various renderings of these tropes, or to the real history underlying them, they can be even more stereotypical or empty than Europa fantasy.

And counterpart cultures ease world-building in the same way.  The writer has to communicate less technical detail to the reader, and they don’t have to world-build as deeply, because they have less need to justify their setting.  When you just know that knights and princesses and stone castles are real, even if you don’t know exactly how they work, you don’t worry so much about the details.  When something is clearly made up and not based on real Earth history, the questions about how things work, and whether they would really work that way given the frame the author has built, can become more of a suspension-of-disbelief killer.  There’s a joke that some things are just too strange for fiction.  Sure, they happened in real life and we have proof.  But in stories, most people expect a sort of logical cause and effect: if a thing happens, it has a good reason based in the story or the world-building.  If something could happen once in a thousand tries based on sheer luck, and it happening in your story is an important plot element, readers are much less likely to suspend disbelief than if it happens 754 times out of 1000 in the real world.  So your world-building needs to make some sort of logical sense to the reader if you want your plot to hinge on it.  And when you have the weight of genre history behind you, readers are far more likely to give you the benefit of the doubt than if you’re the first person ever doing it.

And that’s why fantasy counterpart cultures are so popular.  We know from Earth history, our only referent of a real history that actually occurred, that the things thus depicted (sorta, kinda, if you squint a bit) really did occur and function in a world rigidly bound by physical laws.  Unlike a world bound only by words on a page written by one dude who probably doesn’t even remember the six credits of world history he took in high school.

And as a very meta example of my point, I have now written two long posts full of info-dumping that I’m demanding you read before I even start talking about what I promised to talk about: how to overcome all these hurdles and actually create unique and original worlds and cultures for your fantasy story.

 


Poetry, Language, and Artificial Intelligence

Poetry exemplifies how the meaning of a string of words depends not only upon the sum of the meanings of the words, or on the order in which they are placed, but also upon something we call “context”.  Context is essentially the concept that a single word (or idea) has a different meaning depending on its surroundings.  These surroundings could be linguistic (the language we are assuming the word to belong to, for example), environmental (say it’s cold out and I say “It’s sooooooo hot.”), or a matter of recent events: “The Mets suck” means something very different if they’ve just won a game than if they’ve just lost one.
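
In programming terms, the point is that meaning is a function of the utterance and the context together, never the string alone.  A toy sketch, with entirely invented rules:

    # Toy illustration: the same string flips meaning with context.
    # The rules are invented; the point is interpret(utterance, context),
    # not interpret(utterance).
    def interpret(utterance, context):
        if utterance == "It's sooooooo hot":
            # Literal if the weather agrees, sarcastic if it contradicts.
            if context["temp_c"] > 30:
                return "literal complaint about the heat"
            return "sarcasm about the cold"
        if utterance == "The Mets suck":
            if context["mets_won"]:
                return "ironic griping after a win"
            return "genuine despair after a loss"
        return "unknown"

    print(interpret("It's sooooooo hot", {"temp_c": -5}))  # sarcasm about the cold
    print(interpret("The Mets suck", {"mets_won": True}))  # ironic griping after a win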

Poetry is the art of manipulating the various possible contexts to get across a deeper or more complex meaning than the bare string of words itself could convey.  The layers of meaning are infinitely deep; in fact, in any form of creative writing, it is impossible for any single human to understand all of them.  I say poetry is the “art” of such manipulation because it is most often the least subtle about engaging in it.  All language acts manipulate context.  Just using a simple pronoun is manipulating context to express meaning.

And we don’t decode this manipulation separately from decoding the bare language.  It happens as a sort of infinite feedback loop, working on all the different layers of an utterance at once.  The ability to both manipulate concepts infinitely and understand our own infinite manipulations might be the litmus test for what we consider “intelligent” life.

 

Returning to the three words in our title, I’ve discussed everything but AI.  The difficulty in creating AGI, or artificial general intelligence, lies in the fact that nature had millions or billions of years to sketch out and color in the complex organic machine that grants humans this power of manipulation.  Humans have had maybe 100.  In a classic chicken-and-egg problem, it’s quite difficult to have either the concept web or the system that utilizes it without the other part.  If the system creates the web, how do you know how to code the system without knowing the structure of the web?  And if the web comes first, how can you manipulate it without the complete system?

You might have noticed a perfect example of how context affects meaning in that previous paragraph.  One that was not intentional, but that I noticed as I went along: “chicken-and-egg problem”.  You can’t possibly know what I meant by that phrase without having previously been exposed to the philosophical question of which came first, the chicken that laid the egg or the egg the chicken hatched from.  But once you do know about the debate, it’s pretty easy to figure out what I meant by “chicken-and-egg problem”, even though in theory you have infinite possible meanings.

How in the world are you going to account for every single one of those situations when writing an AI program?  You can’t.  You have to have a system based on very general principles that can deduce that connection from first principles.

 

Although I am a speculative fiction blogger, I am still a fiction blogger.  So how does this post relate to fiction?  When writing fiction, you are engaging in the sort of context manipulation I’ve discussed above as such an intractable problem for AI programmers.  Because you are an intelligent being, you can engage in it instinctually when writing, but unless you are a rare genius, you’ll more likely need to engage in it explicitly.  Really powerful writing comes from knowing exactly what context an event is occurring in in the story and taking advantage of that for emotional impact.

The death of a main character is more moving because you have the context of the reader’s emotional investment in that character.  An unreliable narrator is a useful tool in a story because the truth is more surprising either when the character knew it and purposefully didn’t tell the reader, or when neither of them knew it but it was reasonable given the information both had.  Whereas if the truth is staring the reader in the face but the character is clutching the idiot ball to advance the plot, a reader’s reaction is less likely to be shock or epiphany and more likely to be “well, duh, you idiot!”

Of course, context can always go a layer deeper.  If there are multiple perspectives in the story, the same situation can lead to a great deal of tension, because the reader knows the truth but also knows there was no way this particular character could.  But you can also fuck that up and be accused of artificially manipulating events for melodrama, like if a simple phone call could have cleared up the misunderstanding but you went to unbelievable lengths to prevent it, even though both characters had cell phones and each other’s numbers.

If the only conceivable reason the call didn’t take place was because the author stuck their nose in to prevent it, you haven’t properly used or constructed the context for the story.  On the other hand, perhaps there was an unavoidable reason one character lost their phone earlier in the story, which had sufficient connection to other important plot events to be not just an excuse to avoid the plot-killing phone-call.

The point being, as I said before, that the possible contexts for language or events are infinite.  The secret to good writing lies in being able to judge which contexts are most relevant and making sure that your story functions reasonably within those contexts.  A really, super-out-of-the-way solution to a problem being ignored is obviously a lot more acceptable than ignoring the one staring you in the face.  Sure, your character might be able to send a Morse-code warning message by hacking the electrical grid and blinking the power to New York repeatedly.  But I suspect your readers would be more likely to call you out for solving the communication difficulty that way than for not solving it with the characters’ easily reachable cell phones.

I mention the phone thing because currently, due to rapid technological progress, contexts are shifting far more rapidly than they did in the past.  Plot structures honed for centuries around a lack of easy long-range communication are much less serviceable as archetypes now that we have cell phones.  An author who grew up before the age of ubiquitous smartphones for your seven-year-old is going to have a lot more trouble writing a believable contemporary YA romance than someone who is turning twenty-two in the next three months.  But even then, there are fewer context-verified, time-tested plot structures to base such a story on than there would be for a similar story set in the 50s.  Just imagine how different Romeo and Juliet would have been if they could have just sent a few quick texts.

In the past, the ability of the characters to communicate at all was a strong driver of plots.  These days, it’s far more likely that the trustworthiness of communication will be a central plot point.  In the past, the possible speed of travel dictated the pacing of many events.  That’s far less of an issue nowadays.  More likely, it’s a question of whether you missed your flight.  Although… the increased speed of communication might make some plots more unlikely, but it does counteract to some extent the changes in travel speed.  It might be valuable for your own understanding and ability to manipulate context to look at some works in older settings and some in newer ones, and compare how the authors’ understanding of context increased or decreased the impact and suspension of disbelief for the story.

Everybody has some context for your 50s love story, because they’ve been exposed to past media depicting it.  And a reader is less likely to criticize shoddy contextualizing when they lack any firm context of their own.  Whereas, of course, an expert on horses is far more likely to find and be irritated by mistakes in your grooming and saddling scenes than a kid born 16 years ago is to criticize a baby-boomer’s portrayal of the 60s.

I’m going to end this post with a wish for more stories–both SpecFic and YA–more strongly contextualized in the world of the last 15 years.  There’s so little of it, if you’re gonna go by my high standards.

 
