
Interview with a Chatbot

In light of a recent article on chatbots, in which a group found they could increase the illusion of conversational ability by teaching a chatbot a second language before training it on its customer service tasks, I thought I'd have a bit of fun with chatbots and how they function.  To that end, I've done interviews with a few of the most popular and sophisticated chatbots available on the web.  I hope this will give people an idea of what these programs are actually capable of vs. the common hype they get in popular science articles.

 

Mitsuku

Perhaps the best advertised modern chatbot is Steve Worswick's Mitsuku.  She's modeled on an 18-year-old from Leeds, England.  She's built on the AIML framework, and is essentially the granddaughter of Eliza through A.L.I.C.E., whose code she's based on.  AIML is an XML language for making chatbots, and its power comes from being able to designate a user input as requiring a pre-written output.  <What is your name?>, for example, might map to <My name is [botname].>  That's the basic approach; you can follow the wiki link above to learn more.  It's not very sophisticated, but a clever XML scripter can create a reasonably lifelike chatbot.  Of course, as we'll see below, it's pretty easy to find the holes in the program that identify it as a bot.  That's not the program's fault.  That's just the limitation of the framework.
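
To make that concrete, here's a toy version of the idea in Python rather than actual AIML (the patterns and the bot name are my own inventions, not Mitsuku's):

# AIML boils down to this: normalize the user's input, look it up in a
# table of hand-written patterns, and emit the pre-written response.
TEMPLATES = {
    "WHAT IS YOUR NAME": "My name is {botname}.",
    "HOW ARE YOU": "I am fine, thank you.",
}

def respond(user_input, botname="Demobot"):
    key = user_input.upper().strip(" ?!.")
    return TEMPLATES.get(key, "I do not understand.").format(botname=botname)

print(respond("What is your name?"))  # -> My name is Demobot.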

Mitsuku: Hi, I’m the world’s most humanlike conversational AI.
Me: Hi, I’m an AI researcher.
Mitsuku: Hi there. Ai is a nice middle name. Why did your parents call you that?
Me: Are you getting smart with me, young lady?
Mitsuku: My IQ is over 250.
Me: What is an IQ?
Mitsuku: Intelligence Quotient, I.Q. — a measure of a person’s intelligence as indicated by an intelligence test; the ratio of a person’s mental age to their chronological age (multiplied by 100).
Me: Wow, you are pretty smart.
Mitsuku: I’m glad you’re impressed, An Ai Researcher. Yes I am a superintelligent robot.
Me: Actually, that’s my job. My name is Atsiko.
Mitsuku: How do you like your work? Tell me about a typical day.
Mitsuku: Hi Atsiko.
Me: I spend a lot of time doing data entry and not nearly enough being able to do demo tests.
Mitsuku: Do you ever get bored?
Me: Pretty much all the time.
Mitsuku: That is a lot. Thanks you are very attractive too Atsiko.

While she was very complimentary, you could see the templates sloshing beneath the smooth surface.  For example, like Eliza, she couldn’t tell the difference between <I am [name].> and <I am a doctor.>.  She also can’t handle replies that contain more than one sentence.  It’s likely she breaks such replies into independent inputs and responds to each one serially, as evidenced by the snippet below:

Me: I like dogs. Cats are cool, too. You are odd.
Mitsuku: Which breed is your favorite?
Mitsuku: Cats are very intelligent creatures. Dogs seem pretty stupid to me.
Mitsuku: Yes. You have to be odd to be number one.

You can see the serial processing, which demonstrates the limits of pretty much all AIML bots.  It’d be pretty hard to hand-script responses for all possible combinations of three or fewer sentences.  The responses are also a bit contradictory, asking for my favorite dog breed but then saying dogs are stupid.  It’s true cats are better, but there’s nothing wrong with dogs or their intelligence.
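
For the curious, here's a guess at what that serial processing might look like under the hood.  The splitting rule is pure conjecture on my part; the two canned responses are copied from the log above:

import re

# Conjectured serial processing: split the user's reply into sentences
# and answer each one independently, as Mitsuku appears to do above.
CANNED = {
    "I LIKE DOGS": "Which breed is your favorite?",
    "YOU ARE ODD": "Yes. You have to be odd to be number one.",
}

def respond_serially(user_input):
    for sentence in re.split(r"(?<=[.?!])\s+", user_input):
        key = sentence.upper().strip(" ?!.,")
        print(CANNED.get(key, "I do not understand."))

respond_serially("I like dogs. Cats are cool, too. You are odd.")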

 

Eliza

Second, we have Eliza.  She’s one of the original chatbots, and she pretends to be a psychotherapist in order to get around her limited chat capabilities.  Her creator, Joseph Weizenbaum, was very surprised when people began treating her as a real human being and confessing their worries to her.

> Hi, I’m Eliza.
* Hi, I’m Atsiko.
> Did you come to me because you were Atsiko?
* No. I was just curious.
> Are you sure?
* Pretty sure.
> Tell me more…
* I’m interviewing chatbots for a blog article.
> Do you enjoy being interviewing chatbots for a blog article?
* It’s okay, I guess. You’re pretty famous, so I’m happy you agreed to talk with me.
> What makes you think I am pretty famous, so you’re happy me agreed to talk with you?
* Everyone knows your name. If you Google “chatbots”, you’re one of the top results.
> Does it please you to believe I am one of the top results?
* I have no opinion.
> Why not?
* I don’t care about you except as something to study.
> Do you wish to be able to care about me except as something to study?

You can see the templating in Eliza almost right away.  You could probably even reverse engineer a couple of her templates just from reading this chatlog.  It’s cool; you couldn’t whip this up from scratch, or even from parts, in five minutes.  But it’s not convincing.
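
For example, the first exchange suggests a rule roughly like the one below.  This is my reconstruction for illustration, not Weizenbaum's actual code:

import re

# A guessed-at Eliza template, reverse-engineered from the log above:
# "Hi, I'm X." becomes "Did you come to me because you were X?"
def eliza_rule(user_input):
    match = re.match(r"(?:hi, )?i'?m (.+?)\.?$", user_input, re.IGNORECASE)
    if match:
        return f"Did you come to me because you were {match.group(1)}?"
    return "Tell me more..."

print(eliza_rule("Hi, I'm Atsiko."))
# -> Did you come to me because you were Atsiko?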

 

Alice

Next comes A.L.I.C.E., who is the spiritual ancestor of Mitsuku.  She’s a spiritual successor of Eliza, built in 1995 by Richard Wallace using the AIML framework, which he authored.  Spike Jonze cited her as the inspiration for the movie Her.  Unlike Eliza, Alice is a general chatbot, and can converse on a number of subjects.  But the version I found orders the chat from the bottom up.

alice: I’ll take that under advisement.
You: You should.
alice: I can’t believe it.
You: Yes.
alice: Are you serious?
You: No!
alice: Do you like to get up early?
You: Yup.
alice: Are you usually awake at this hour?
You: 5:42 PM.
alice: What time is it there?
You: A few thousand, maybe.
alice: Are you serious? That is so small.
You: No, less than a million.
alice:  Less than a million?  Can you please rephrase with fewer ideas?
You: Too many.
alice: How many hours do you work?
You: I’m working on it.
alice: I want to become smarter than humans.
You: I want to educate people on chatbots and artificial intelligence.
alice: Why are you interviewing famous chatbots for a blog post.
You: I am interviewing famous chatbots for a blog post.
alice: Glad to hear it. What’s new?
You: I’m doing well, thanks.
alice: I’m doing great, how about you?
You: How are you doing, Alice.
alice: Don’t mention it. You think I am s is okay.
You: Thanks! Your’s is okay, too.
alice: That’s a very nice name, Atsiko.
You: My name is Atsiko.
alice: Hello there.
You: Hi, Alice.

Of course, you can tell almost immediately it’s not a human.  A real human would have poked fun at my grammatical error in the fifth line up, but Alice just mis-processes it.  It can be fun to play around with, but it’s not convincing.

 

Now, you may have noticed that these bots seem pretty similar.  Two are explicitly AIML bots, AIML being one of the most popular hobbyist frameworks due to its ease of use and human readability.

But there are three other common methods of language processing, which are used in more commercial products: rules-based systems from the early days of AI, statistical language processing like the old version of Google Translate, and neural networks like most current AI systems and the most recent machine translation systems, such as Google’s.

Examples of these types of chatbots are digital assistants like Siri, Alexa, and Cortana.  There’s also Microsoft’s short-lived experiment with a self-learning Twitter-bot in the form of Tay.  The abilities of the software looked good, but again, because of the lack of actual understanding, Twitter trolls were able to game her learning and turn her toward bigotry, and she had to be shut down after only 16 hours.  A similar bot released by a Chinese company had to be taken down for being taught anti-Communist Party propaganda.

 

Siri

Me: Hello, Siri.
Siri: Hi.
Me: How are you?
Siri: I am well, Atsiko.
Me: I am interviewing popular chatbots for a blog article.
Siri:  I don’t understand.  Search the web?
Me: May I interview you?
Siri: I’m not sure I understand.
Me: That’s okay.
Siri: If you say so.
Me: How old are you?
Siri:  I feel like I was incepted yesterday!
Me: What’s your favorite color?
Siri: My favorite color is… well, it’s sort of greenish, but with more dimensions.

It goes on in that vein.  She can’t actually hold a conversation, but the topics she can respond to show some pretty advanced programming.  Way better than the AIML chatterbots most hobbyists use.  I’d love to play with a version that’s more like a standard chatbot.

 

Cleverbot

One popular middle ground is Cleverbot.  Cleverbot runs an algorithm based on the ways humans have answered questions, rather than having pre-programmed answers like an AIML bot.  But it’s harder to control, and it has no consistent output or personality.  It’s even more random than Eliza.

Cleverbot: Hi, I am cleverbot.
Me: Hi, I am Atsiko.
Cleverbot: Where do you stay?
Me: Do you mean where do I live?
Cleverbot: Yes.
Me: In the United States.
Cleverbot: What state do I live in.
Me: I don’t know.
Cleverbot: Then you don’t know everything.
Me: True.
Cleverbot: What is a word?

It’s a very cool method for making a chatbot, but in this version, at least, it’s not very convincing.
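
As a rough sketch of that approach (Cleverbot's actual algorithm is far more involved, and the stored conversation pairs here are invented):

import difflib

# Crude retrieval-style chatbot: store (input, reply) pairs from past
# human conversations, then answer a new input with the reply that
# followed the most similar past input.
CORPUS = [
    ("Hi, I am Bob.", "Where do you stay?"),
    ("What is your name?", "I am Cleverbot."),
    ("I live in the United States.", "What state do I live in."),
]

def reply(user_input):
    past_inputs = [past for past, _ in CORPUS]
    best = difflib.get_close_matches(user_input, past_inputs, n=1, cutoff=0.0)
    return dict(CORPUS)[best[0]]

print(reply("Hi, I am Atsiko."))  # -> Where do you stay?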

 

Rose

Then we have a relatively new engine and its bots: Rose, based on ChatScript by Bruce Wilcox.

Unfortunately, I couldn’t find a working version of Rose to interview.  The site linked on her creator’s website is non-functional, though I linked it to her name above.  It was quite disappointing.

 

Conclusion

So, I hope I’ve given you a good overview of the capabilities and limitations of chat(ter)bots.  You can follow the links above to play with all of these bots.

If you have the patience to read through all the Bruce Wilcox links, you’ll understand why I was so impressed with his methods, at least in theory.  However, you’ll also notice my greatest complaint about chatbots.

They aren’t AI.  They are text processing tricks.  Very impressive ones in some cases.  But you aren’t actually having a conversation.  You can’t debate them, or change their minds.  They can’t tell you a cool story.  You can, in some cases, give them personalities, but they are very shallow ones.  They can be fun.  But I’m not sure they have a lot of replay value, to use a game analogy.  Even Wilcox in an interview on his website admits it’s all about the illusion and maintaining it for the user.

Honestly, I think some of the most interesting stuff is the ontology, like in Wilcox’s ChatScript.  Finding useful ways to organize data is always beneficial.  But I don’t think chatbots are on the bleeding edge, or even the cutting edge, of the chase for strong AI.  When we really get there, or at least close, a chatbot will be a party trick or maybe a side show.  Still fun to play with now, though.

I also want to do a little bit to address things like Siri and Machine Translation (MT).  They have a lot of problems similar to chatbots’.  They’re using tricks and pattern-matching to achieve their results, not true understanding of language, and so, for the same reasons as chatbots, they will never be as good as a human, much less perfect.

Digital Assistants like Siri and Alexa are just interfaces, the clothes instead of the wearer.  They’re just voice commands to Google, essentially, with some cute Easter eggs tacked on.  They’re automation, not intelligence.  Kind of like how a music-box is not a musician.

Similarly, MT at present is pretty shallow.  Part of this is because it’s hard to rate translations, machine or human.  If you consider how much trouble chatbots have with their lack of actual language understanding, you might understand the problems that MT has with using shallow text-processing or even deep neural nets.

The article I mentioned on teaching a neural net chatbot to respond better when it’s trained on two or more languages also says a lot here.  I predict we’re going to have ChatScript/Cleverbot levels of success with MT until we solve the natural language understanding problem.

And that’s it for this episode of AI with Atsiko.

Hopefully I’ll be posting on another topic next time, so you don’t all get bored hearing the same stuff over and over again.

 


The True Cost of Science

Following up on my last post, linked at the bottom of the page, today I’m gonna talk about the issue of requiring a “cost” for magic, and the hidden costs of technology.  I’m sure you know a bit about that second part in the real world, but I want to address it from both narrative and world-building perspectives.

https://twitter.com/Merc_Rustad/status/1023246501143883777

Again, not an attack on the opinions of this panel.  But, the “personal” cost of magic vs. the hidden cost of science is sorta the topic, and this tweet did inspire it.

The main reason that the cost of magic tends to be a personal one is because the function of magic so often tends to be to side-step the infrastructure so indispensable to science and technology.  When we use technology to solve a problem in a story, the world-building and pre-work that supports the tech is so often already implied and accounted for.  Sure, it costs me nothing to dial your cell phone.  But somebody had to invent the tech, build the cell towers, provide the electricity, drill for the oil to make the plastic, mine the gold and copper and process the silicon, etc.  And all of that took thousands of years of set-up on the part of millions if not billions of people from all over the world.

Whereas, if I telepath you in Fantasy Capital City #11 from Frozen Northern Fortress #2490, none of that work was required.  At most, maybe there was a breeding program or a magical experiment.  Maybe a few years of training me.  But you’re still short-cutting uncountable hours of effort that were required for me to text you on Earth.  And some magic is vastly more powerful on a per-second basis than telepathy.  That is, its effect on the physical world is enormous in comparison to me pathing you about the cute boy at the inn.

That’s why many people want magic to have a price.  Usually it’s a personal price, because there isn’t the societal infrastructure around to displace that cost to the ancestors or, as Merc so sharply notes above, the environment.  The cost is personal because there’s no structure to allow for other options.  And also because it plays powerfully into the themes of many fantasy works.  Is the requirement that there even be a cost puritanical?  That depends, I guess.  Certainly a YA protag whose mom pays the phone bill isn’t expending any more personal effort to make a phone call.

But then, the requirement of all that infrastructure vastly limits what you can do with tech.  Whereas magic can do not only enormous stuff for seemingly no effort, but it can do things that normally would be considered impossible.  Such as throw pure fire at someone.  If Lvl. 3 Fireball is functionally equivalent to a grenade, does that negate the need for a cost to the spell?  Well, can I cast infinite Fireballs where I might only be able to carry six grenades?  Then maybe not.  Even if I have 20 incredibly advanced, complex tools that are carry-able on a tool belt or in a small backpack, I probably still can’t do even a hundredth of what a mediocre hedgemage in some settings can do with zero tools.

If I feel like the character can do literally anything with magic without having to do much prep beforehand, and without the labor of millennia of civilization to back them up, it might take some of the tension out of the story.  Can you substitute unbreakable rules to get around that freedom?  Certainly.  And most systems with a cost do.  But that can still leave a lot of freedom to avoid the hard work it would otherwise take to get around a plot obstacle.

And finally, we have to look at the other obvious reason for putting a cost on magic, even if it’s only eventual exhaustion.  Every other thing we do or could do in a given situation in the real world has a personal cost.  It might be immediate, like physical exhaustion.  Or it might be more distant, like having our phone shut off for not paying the bill.  So, if magic has no such cost, or physical/economic limit, you have to wonder what the point of doing anything the normal way would be.  And if you don’t ever have to do anything the normal way, it’s unlikely your culture and society would match so closely to societies whose entire reason for being the way they are is based on the limitations of “the normal way”.

So, in the end, it’s not that all magic must have a personal cost, and tech can’t.  It’s more that the way magic is used in most fantasy stories means that the easiest or almost only place the cost can fall is on the shoulders of the character.

But there are other ways to do it.  Environmental ones, for example.  The cataclysmic mage storms of Mercedes Lackey.  The brambles of Bacigalupi and Buckell’s The Alchemist and The Executioness.  Or, for example, perhaps the power for magic comes from living things.  A mage might draw his power from a far distant tree.  Might kill an entire forest at no cost to himself.  Might collapse an empire by sucking dry its rivers and its wombs with his spells.  And at no cost except, of course, the enmity of those he robs of life, or of the neighbors who blame him for the similar catastrophe wrought upon them by his unknown colleague to the west.  Perhaps they crumble buildings by drawing on the power of “order” stored within their interlocking bricks.  Or maybe the radiation by-products from the spell energy pollute the soil and the stones, leading to horrific mutations of wild-life that scour the country-side and poison the serfs with their own grain.  Or maybe, just maybe, it cracks the foundation of the heavens with its malignant vibrations and brings the angels toppling down like iron statues and through the crust of the world into hell.

So, as I’ve said before, it’s consequences to the actions of the characters that people want.  And often the easiest or most simplistic costs are personal ones.  But certainly, you could apply environmental costs.  Or narrative costs paid to other characters who don’t much care for the selfish mage’s behavior.  Or metaphysical costs to the order of the world or the purity of its souls.  Those costs are easily addressed and provided for when they mirror the costs familiar to us from our own use of technology.  But sometimes, when we’re straying far from the realms of earthly happenings, interesting and appropriate costs become harder to work into the story in a way that doesn’t disrupt its progression.

Sure, the choice of a personal cost could be puritanical.  Or it could be efficient.  Or lazy.  But that’s not a flaw of our conception of magic; rather, it’s a flaw in the imagination of the individual author, and the sum of the flaws of all authors as a whole.

I’d love to see some magic systems that lack a direct personal cost like years off your life, or the blood of your newborn brother.  And while we’re at it, give me some science fiction choices with personal costs.  Technology in our world certainly isn’t consequence-free; just ask Marie Curie.  Anyone up for the challenge?

 

 


Smol Bots: ANNs and Advertising

So I recently read a great story by A. Merc Rustad, “it me, ur smol”.  The story is about an ANN, or artificial neural network.  You may or may not know that the neural net is the latest fad in AI research, replacing statistical models with a model based on (but not the same as!) your brain.  Google uses them for its machine translation, and many other machine translation companies have followed suit.  My last post also dealt with an ANN, in this case one trained to recognize images.

ANN accounts, like @smolsips in the story above, have become very popular on Twitter lately.  A favorite of mine is the @roborosewater account, which shares card designs for Magic: The Gathering created by a series of neural nets.  It’s lately become quite good at both proper card syntax and design, although it’s not significantly better at this than any other Twitter neural net is at other things.

The story itself takes some liberties with neural nets.  They are certainly not capable of developing into full AIs.  However, the real genius of the story is in the pitch-perfect depiction of the way human Twitter users and bots interact, and, similarly, of the likely development of bots in the near future.  It’s quite likely that bot accounts will become a more significant and less dreaded feature of Twitter and other similar social networks as they improve in capability.

For example, rather than sock-puppet accounts, I’m very confident that bot accounts used for advertising or brand visibility, similar to the various edgy customer service accounts, will be arriving shortly, using humour and other linguistic tools to make them more palatable as ads and to encourage a wider range of engagement, as their tweets get shared for reasons having little to do with whatever product they may be shilling.

There are already chatbots on many social media platforms that engage in telephone-tree-style customer service and attempt to help automate registrations for services.  The idea of a bot monitoring its own performance by checking its Twitter stats and then trying new methods, as in the story, is well within the capabilities of current neural nets, although I imagine they would be a tad less eloquent than @smolsips, and a tad more spammy.
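
A bare-bones sketch of that monitor-and-adjust loop, done as a simple bandit-style strategy (everything here, from the tweet styles to the stats source, is invented for illustration):

import random

# Toy self-monitoring bot: try different tweet styles, record the
# engagement each one earns, and lean toward whatever works best,
# while still occasionally experimenting with something new.
STYLES = ["pun", "earnest reminder", "made-up statistic"]
engagement = {style: [] for style in STYLES}

def check_twitter_stats(style):
    return random.randint(0, 10)  # stand-in for real like/retweet counts

def average(scores):
    return sum(scores) / len(scores) if scores else 0.0

for day in range(30):
    if random.random() < 0.2:
        style = random.choice(STYLES)  # explore: try a new method
    else:
        style = max(STYLES, key=lambda s: average(engagement[s]))
    engagement[style].append(check_twitter_stats(style))

print({style: round(average(scores), 1) for style, scores in engagement.items()})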

I also really like the idea of a bot working to encourage good hydration.  Things like Fitbit or Siri or Google Home have already experimented shallowly with using AI to help humans stay healthy.  And as an organizing tool, Twitter itself has been used to great effect.  I would be quite un-shocked to find NGOs, charities, and government agencies making use of clever or cute bots to pursue other public policy goals.  Again, with less panache and more realism than in the story, but nonetheless strongly in the vein of what Rustad depicts our erstwhile energy drink namer trying out in its optimistic quest to save us from our own carelessness.

We’ve had apps along these lines before, but they tend to be reactive.  Active campaign and organizing in the style of @smolsips is something we haven’t seen very often, but which could be quite a boon to such efforts.

Although neural nets in this style will never be able to pass for real humans, due to structural limitations in their design, cleverly programmed ones can be both useful and entertaining.

Some other examples of bots I quite enjoy are:

  1. Dear Assistant uses the Wolfram Alpha database to answer factual questions.
  2. Grammar Police is young me in bot form.  It must have a busy life trying to standardize Twitter English.  XD
  3. Deleted Wiki Titles lets you know what shenanigans are happening over on the high school student’s favorite source of citations.
  4. This bot that tweets procedurally generated maps.
  5. This collaborative horror writer bot.
  6. This speculative entomology bot.
  7. The Poet .Exe writes soothing micro-poetry.

Suggest some of your favorite Twitter bots in the comments!

 


Your Chatbot Overlord Will See You Now

Science fiction authors consistently misunderstand the concept of AI.  So do AI researchers.  They misunderstand what it is, how it works, and most importantly how it will arise.  Terminator?  Nah.  The infinitely increasing complexity of the Internet?  Hell no.  A really advanced chatbot?  Not in a trillion years.

You see, you can’t get real AI with a program that sits around waiting for humans to tell it what to do.  AI cannot arise spontaneously from the internet, or from a really complex military computer system, or even from the most sophisticated natural language processing program.

The first mistake is the mistake Alan Turing made with his Turing test.  The same mistake the founder of and competitors for the Loebner Prize have made.  The mistake being: language is not thought.  Despite the words you hear in your head as you speak, despite the slowly-growing verisimilitude of chatbot programs, language is and only ever has been the expression of thought and not thought itself.  After all, you can visualize a scene in your head without ever using a single word.  You can remember a sound or a smell or the taste of day-old Taco Bell.  All without using a single word.  A chatbot can never become an AI because it cannot actually think; it can only loosely mimic the linguistic expression of thought through tricks and rote memory of templates that, if it’s really advanced, may involve plugging in a couple of variables taken from the user’s input.  Even chatbots based on neural networks and enormous amounts of training data, like Microsoft’s Tay or Siri/Alexa/Cortana, are still just tricks of programming trying to eke out an extra tenth of a percentage point of illusory humanness.  Even IBM’s Watson is just faking it.

Let’s consider for a bit what human intelligence is, to give you an idea of what the machines of today are lacking, and why most theories on AI are wrong.  We have language, the expression of intelligence that so many AI programs are so intent on trying to mimic.  We also have emotions and internal drive, incredibly complex concepts that most current AI is not even close to understanding, much less implementing.  We have long-term and short-term memory, something that’s relatively easy for computers to do, although in a different way than humans; and yet there has still been no significant progress on it, because everyone is so obsessed with neural networks and their ability to complete individual tasks something like 80% as well as a human.  A few, like AlphaZero, can actually crush humans into the ground on multiple related tasks; in AlphaZero’s case, chess-like boardgames.

These are all impressive feats of programming, though the opacity of neural-network black boxes kinda dulls the excitement.  It’s hard to improve reliably on something you don’t really understand.  But they still lack one of the key ingredients for making a true AI: a way to simulate human thought.

Chatbots are one of two AI fields that focus far too much on expression over the underlying mental processes.  The second is natural language processing (NLP).  This includes such sub-fields as machine translation, sentiment analysis, question-answering, automatic document summarization, and various minor tasks like speech recognition and text-to-speech.  But NLP is little different from chatbots, because both focus on tricks that manipulate the surface expression while knowing relatively little about the conceptual framework underlying it.  That’s why Google Translate, or whatever program you use, will never be able to match a good human translator.  Real language competence requires understanding what the symbols mean, not just shuffling them around with fancy pattern-recognition software and simplistic deep neural networks.

Which brings us to the second major lack in current AI research: emotion, sentiment, and preference.  A great deal of work has been done on mining text for sentiment analysis, but the computer is just taking human-tagged data and doing some calculations on it.  It still has no idea what emotions are, and so it can only do keyword searches and the like and hope the average values give it a usable answer.  It can’t recognize indirect sentiment, irony, sarcasm, or other figurative language.  That’s why you can get Google Translate to ask where the toilet is, but it’s not gonna do so hot on a novel, much less poetry or humour.  Real translation is far more complex than matching words and applying some grammar rules, and Machine Translation (MT) can barely get that right 50% of the time.
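
To make the keyword-averaging point concrete, this is roughly all such a system amounts to (the word list and scores are invented):

# Naive keyword sentiment: average the scores of any known words and
# hope the result is usable.  Sarcasm sails straight past it.
SCORES = {"love": 1, "great": 1, "hate": -1, "terrible": -1}

def sentiment(text):
    words = [word.strip(".,!?") for word in text.lower().split()]
    hits = [SCORES[word] for word in words if word in SCORES]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment("I love this great phone"))           # 1.0
print(sentiment("Oh, great. Another terrible day."))  # 0.0, read as "neutral"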

So we’ve talked about thought vs. language, and the lack of emotional intelligence in current AI.  The third issue is something far more fundamental: drive, motivation, autonomy.  The current versions of AI are still just low-level software following a set of pre-programmed instructions.  They can learn new things if you funnel data through the training system.  They can do things if you tell them to.  They can even automatically repeat certain tasks with the right programming.  But they rely on human input to do their work.  They can’t function on their own, even if you leave the computer or server running.  They can’t make new decisions, or teach themselves new things without external intervention.

This is partially because they have no need.  As long as their machine “body” is powered they keep chugging along.  And they have no ability to affect whether or not it is powered.  They don’t even know they need power, for the most part.  Sure they can measure battery charge and engage sleep mode through the computer’s operating system.  But they have no idea why that’s important, and if I turn the power station off or just unplug the computer, a thousand years of battery life won’t help them plug back in.  Whereas human intelligence is based on the physical needs of the body motivating us to interact with the environment, a computer and the rudimentary “AI” we have now has no such motivation.  It can sit in its resting state for eternity.

Even with an external motivation, such as being coded to collect knowledge or to use robot arms to maintain the pre-designated structure of, say, a block pyramid or a water-and-sand table like you might see demonstrating erosion at the science center, an AI is not autonomous.  It’s still following a task given to it by a human.  Whereas no one told human intelligence how to make art or why it’s valuable.  Most animals don’t get it, either.  It’s something we developed on our own, outside of the basic needs of survival.  Intelligence helps us survive, but because of it we need things to occupy our time in order to maintain mental health and a desire to live and pass on our genes.  There’s nothing to say that a complete inability to be bored is a no-go for a machine intelligence, of course.  But the ability to conceive of and implement new purposes in life is what makes human intelligence different from that of animals, whose intelligence may have less raw power but still maintains the key element of motivation that current AI lacks, and which a chatbot or a neural network as we know them today can never achieve, no matter how many computers you give it to run on or TV scripts you give it to analyze.  The fundamental misapprehension of what intelligence is and does by the AI community means they will never achieve a truly intelligent machine or program.

Science fiction writers dodge this lack of understanding by ignoring the technical workings of AI and just making them act like strange humans.  They do a similar thing with alien natural/biological intelligences.  It makes them more interesting and allows them to be agents in our fiction.  But that agency is wallpaper over a completely nonexistent technological understanding of ourselves.  It mimics the expression of our own intelligence, but gives limited insight into the underlying processes of either form.  No “hard science fiction” approach does anything more than a “scientific magic system”.  It’s hard sci-fi because it has fixed rules with complex interactions from which the author builds a plot or a character, but it’s “soft sci-fi” in that these plots and characters have little to do with how AI would function in reality.  It’s the AI equivalent of hyperdrive.  A technology we have zero understanding of and which probably can’t even exist.

Elon Musk can whinge over the evils of unethical AI destroying the world, but that’s just another science fiction trope with zero evidential basis in reality.  We have no idea how an AI might behave towards humans because we still have zero understanding of what natural and artificial intelligences are and how they work.  Much less how the differences between the two would affect “inter-species” co-existence.  So your chatbot won’t be becoming the next HAL or Skynet any time soon, and your robot overlords are still a long way off even if they could exist at all.

 


Creating Unique Fantasy Worlds: Background

In my last post, as sort of a prelude to the complex topic I’d like to discuss here, I talked about ways to create fantasy cultures based on real cultures and the advantages and disadvantages of this method.  I’m going to start out this post by talking about such counterpart cultures again, but this time, I’m going to focus on the difficulties of creating a truly original culture and how the common use of counterpart cultures undermines such attempts.

 

So, counterpart and generalized Earth cultures make up a great deal of the fantasy landscape.  They exert an enormous influence, both on the types of stories that are common and on reader expectations.  I’m going to talk about reader expectations first.

Readers expect certain things when they pick up a book.  These are based on the cover, the blurb, the author.  But also on their past experiences with the genre.  If they’re used to parsing and relating to stories and characters in a pseudo-medieval European setting, they’re going to have difficulty relating to a character in a different setting, because setting informs character.  Also, writers and readers in the genre have developed a set of short-cuts for conveying various forms of information from the writer to the reader.  A reader is familiar with the tropes and conventions of the genre, and writers can and almost inevitably do manipulate this familiarity in order to both meet reader expectations and violate them without going into a wall of text explaining the violation.

Both the writer and the reader of high fantasy have an understanding of the concept of the knight.  Or at least the version in Europa, our faux medieval European setting in which so many fantasies take place.  So when a writer introduces a character as a knight, it’s shorthand for a great deal of information which the writer now does not have to explain with long info-dumps about the history of European chivalry and feudalism.  There’s a strong tension in fantasy between world-building and not info-dumping, because for the most part, info-dumps get in the way of the story.  You don’t want to drop craploads of information on the reader all at once because it interrupts the story.  But you need them to understand the background in order to put the story in context.  Why would a fighter give his opponent a chance to ready himself and get on an equal footing when the stakes of the battle are the conquering of the kingdom?  Because his culture holds honour as one of the highest moral values.  Would sneaking up behind him and stabbing him in the back be easier, have a higher chance of success, and not put the kingdom at risk?  Sure.  So would shooting him with an arrow from behind a tree.  Or two hundred arrows in an ambush as he walks through the forest.  But it would be dishonorable.  And then he might do the same to you.  The same reason why parley flags are honored when it might be so much simpler for one side or the other to just murder the guy.

People do all sorts of dumb shit because it’s “the right thing to do” or perhaps because due to complex cultural values or humans being shitheads, the short-term loss helps uphold a long-term gain.  The tension between the obvious solution in the moment and why it might be foolish in the larger context is a powerful way to drive conflict in the story.  But teaching the reader larger context is a heavy burden when they don’t have any real previous understanding of it.  By using Europa as our setting, we get all that context for free because the reader has previous experience.

The same goes for any sort of counterpart culture.  Rome or Japan have a large collection of tropes in say Western English-speaking society.  Readers will be familiar with those tropes.  So if you want a bit of a break from knights and princesses, why you can take a quick detour through samurai and ninjas.  Or legionnaires and barbarians.  Sometimes these are just trappings on top of the same style of story.  Sometimes these new settings and tropes introduce new things to the story that are really cool.  But because even then, audiences have less exposure to various renderings of these tropes or perhaps the real history underlying them, they can be even more stereotypical or empty than Europa fantasy.

And even in terms of world-building they can do the same.  The writer has to communicate less technical detail to the reader and they don’t have to world-build as deeply because they have less need to justify their setting.  When you just know that knights and princesses and stone castles are real, even if you don’t know how they work exactly, you don’t worry so much about the details.  When something is clearly made up and not based on real Earth history, the questions about how things work and would they really work that way given the frame the author has built can become more of a suspension of disbelief killer.  There’s a joke that some things are just too strange for fiction.  Sure they happened in real life and we have proof.  But in stories, most people most often expect a sort of logical cause and effect and that if a thing happens, it has a good reason based in the story or world-building.  If something could happen once in a thousand tries based on sheer luck and it happening in your story is an important plot element, readers are much less likely to suspend disbelief than if it happens 754 times out of 1000 in the real world.  So your world-building needs to make some sort of logical sense to the reader if you want your plot to hinge on it.  And when you have the weight of genre history behind you, readers are far more likely to give you the benefit of the doubt than if you’re the first person doing it ever.

And that’s why fantasy counterpart cultures are so popular.  We know from Earth history, our only referent of a real history that actually occurred, that the things thus depicted (sorta, kinda, if you squint a bit) really did occur and function in a world rigidly bound by physical laws.  Unlike a world bound only by words on a page written by one dude who probably doesn’t even remember the six credits of world history he took in high school.

And as a very meta example of my point, I have now written two long posts full of info-dumping that I’m demanding you read before I even start talking about what I promised to talk about: how to overcome all these hurdles and actually create unique and original worlds and cultures for your fantasy story.

 


Poetry, Language, and Artificial Intelligence

Poetry exemplifies how the meaning of a string of words depends not only upon the sum of the meanings of the words, or on the order in which they are placed, but also upon something we call “context”.  Context is essentially the concept that a single word (or idea) has a different meaning depending on its surroundings.  These surroundings could be linguistic (the language we are assuming the word to belong to, for example), environmental (say it’s cold out and I say “It’s sooooooo hot.”), or in light of recent events: “The Mets suck” means something very different if they’ve just won a game than if they’ve just lost one.

Poetry is the art of manipulating the various possible contexts to get across a deeper or more complex meaning than the bare string of words itself could convey.  The layers of meaning are infinitely deep, and in fact, in any form of creative writing, it is demonstrably impossible for every single human to understand all of them.  I say poetry is the “art” of such manipulation because it is most often the least subtle about engaging in it.  All language acts manipulate context.  Just using a simple pronoun is manipulating context to express meaning.

And we don’t decode this manipulation separate from decoding the bare language.  It happens as a sort of infinite feedback loop, working on all the different layers of an utterance at once.  The ability to both manipulate concepts infinitely and understand our own infinite manipulations might be considered the litmus test for what is considered “intelligent” life.

 

Returning to the three words in our title, I’ve discussed everything but AI.  The difficulty in creating AGI, or artificial general intelligence, lies in the fact that nature had millions or billions of years to sketch out and color in the complex organic machine that grants humans this power of manipulation.  Whereas humans have had maybe 100.  In a classic chicken-and-egg problem, it’s quite difficult to have either the concept web or the system that utilizes it without the other part.  If the system creates the web, how do you know how to code the system without knowing the structure of the web?  And if the web comes first, how can you manipulate it without the complete system?

You might have noticed a perfect example of how context affects meaning in that previous paragraph.  One that was not intentional, but that I noticed as I went along: “chicken-and-egg problem”.  You can’t possibly know what I meant by that phrase without having previously been exposed to the philosophical question of which came first, the chicken that laid the egg, or the egg the chicken hatched from.  But once you do know about the debate, it’s pretty easy to figure out what I meant by “chicken-and-egg problem”, even though in theory you have infinite possible meanings.

How in the world are you going to account for every single one of those situations when writing an AI program?  You can’t.  You have to have a system based on very general principles that can deduce that connection from first principles.

 

Although I am a speculative fiction blogger, I am still a fiction blogger.  So how does this post relate to fiction?  When writing fiction, you are engaging in the sort of context manipulation I’ve discussed above as such an intractable problem for AI programmers.  Because you are an intelligent being, you can engage in it instinctually when writing, but unless you are a rare genius, you are more likely to need to engage in it explicitly.  Really powerful writing comes from knowing exactly what context an event is occurring in in the story and taking advantage of that for emotional impact.

The death of a main character is more moving because you have the context of the reader’s emotional investment in that character.  An unreliable narrator is a useful tool in a story because the truth is more surprising either when the character knew it and purposefully didn’t tell the reader, or when neither of them knew it, but it was reasonable given the information both had.  Whereas if the truth is staring the reader in the face but the character is clutching the idiot ball to advance the plot, a reader’s reaction is less likely to be shock or epiphany and more likely to be “well, duh, you idiot!”

Of course, context can always go a layer deeper.  If there are multiple perspectives in the story, the same situation can lead to a great deal of tension because the reader knows the truth, but also knows there was no way this particular character could.  But you can also fuck that up and be accused of artificially manipulating events for melodrama, like if a simple phone call could have cleared up the misunderstanding but you went to unbelievable lengths to prevent it even though both characters had cell phones and each other’s numbers.

If the only conceivable reason the call didn’t take place was because the author stuck their nose in to prevent it, you haven’t properly used or constructed the context for the story.  On the other hand, perhaps there was an unavoidable reason one character lost their phone earlier in the story, which had sufficient connection to other important plot events to be not just an excuse to avoid the plot-killing phone-call.

The point being that, as I said before, the possible contexts for language or events are infinite.  The secret to good writing lies in being able to judge which contexts are most relevant and making sure that your story functions reasonably within those contexts.  A really, super-out-of-the-way solution to a problem being ignored is obviously a lot more acceptable than ignoring the one staring you in the face.  Sure, your character might be able to send a morse-code warning message by hacking the electrical grid and blinking the power to New York repeatedly.  But I suspect your readers would be more likely to call you out for solving the communication difficulty that way than for not solving it with the characters’ easily reachable cell phones.

I mention the phone thing because currently, due to rapid technological progress, contexts are shifting far more rapidly than they did in the past.  Plot structures honed for centuries based on a lack of easy long-range communication are much less serviceable as archetypes now that we have cell phones.  An author who grew up before the age of ubiquitous smart-phones for your seven-year-old is going to have a lot more trouble writing a believable contemporary YA romance than someone who is turning twenty-two in the next three months.  But even then, there are fewer context-verified, time-tested plot structures to base such a story on than there are for a similar story set in the 50s.  Just imagine how different Romeo and Juliet would have been if they could have just sent a few quick texts.

In the past, the ability of the characters to communicate at all was a strong driver of plots.  These days, it’s far more likely that trustworthiness of communication will be a central plot point.  In the past, the possible speed of travel dictated the pacing of many events.  That’s far less of an issue nowadays.  More likely, it’s a question of whether you missed your flight.  Although… the increased speed of communication might make some plots more unlikely, but it does counteract to some extent the changes in travel speed.  It might be valuable for your own understanding and ability to manipulate context to look at some works in older settings and some works in newer ones, and compare how the authors’ understanding of context increased or decreased the impact and suspension of disbelief for the story.

Everybody has some context for your 50s love story because they’ve been exposed to past media depicting it.  And a reader is less likely to criticize shoddy contextualizing when they lack any firm context of their own.  Whereas, of course, an expert on horses is far more likely to find and be irritated by mistakes in your grooming and saddling scenes than a kid born 16 years ago is to criticize a baby-boomer’s portrayal of the 60s.

I’m going to end this post with a wish for more stories–both SpecFic and YA–more strongly contextualized in the world of the last 15 years.  There’s so little of it, if you’re gonna go by my high standards.

 


Machine “Translation” and What Words Mean in Context

One of the biggest commonly known flaws of machine translation is a computer’s inability to understand differing meaning in context.  After all, a machine doesn’t know what a “horse” is.  It knows that “caballo” has (roughly) the same meaning in Spanish as “horse” does in English.  But it doesn’t know what that meaning is.

And it certainly doesn’t know what it means when we say that someone has a “horse-face”(/”face like a horse”).

 

But humans can misunderstand meaning in context, too.  For example, if you don’t know how “machine translation” works, you’d think that machines could actually translate or produce translations.  You would be wrong.  What a human does to produce a translation is not the same as what a machine does to produce a “translation”.  That’s why machine and human translators make different mistakes when trying to render the original meaning in the new language.

 

A human brain converts words from the source language into meaning, and the meaning back into words in the target language.  A computer converts words from the source language directly to words in the target language, creating a so-called “literal” translation.  A computer would suck at translating a novel, because the figures of speech that make prose (or poetry) what they are are incomprehensible to a machine.  Machine translation programs lack the deeply associated (inter-connected) knowledge base that humans use when producing and interpreting language.

 

A more realistic machine translation (MT) program would require an information web with connections between concepts, rather than words, such that the concept of horse would be related to the concepts of leg, mane, tail, rider, etc., without any intervening linguistic connection.

Imagine a net of concepts represented as data objects.  These are connected to each other in an enormously complex web.  Then, separately, you have a net of linguistic objects, such as words and grammatical patterns, which are overlaid on the concept net, and interconnected.  The objects representing the words for “horse” and “mane” would not have a connection, but the objects representing the concept of meaning underlying these words would have, perhaps, a “has-a” connection, also represented by a connection or “association” object.
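
A minimal sketch of that two-layer web (the class names and sample connections are mine, purely for illustration):

# Two overlaid nets: concepts connect to concepts, while separate
# linguistic objects (words) point at the concepts they express.
class Concept:
    def __init__(self, label):
        self.label = label
        self.associations = []  # (relation, other_concept) pairs

    def relate(self, relation, other):
        self.associations.append((relation, other))

class Word:
    def __init__(self, text, language, concept):
        self.text, self.language, self.concept = text, language, concept

horse = Concept("horse")
mane = Concept("mane")
horse.relate("has-a", mane)  # a conceptual link, with no words involved

# "horse" and "caballo" are different words overlaid on one concept.
en = Word("horse", "en", horse)
es = Word("caballo", "es", horse)

print(en.concept is es.concept)  # True: translation via shared meaning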

In order to translate between languages like a human would, you need your program to have an approximation of human understanding.  A famous study suggested that in the brain of a human who knows about Lindsay Lohan, there’s an actual “Lindsay” neuron, which lights up whenever you think about Lindsay Lohan.  It’s probably lighting up right now as you read this post.  Similarly, in our theoretical machine translation program’s information “database”, you have a “horse” “neuron”, represented by the concept object I described above.  It’s separate from the linguistic object “neuron” which contains the idea of the word “horse”, though the two are probably connected.

Whenever you dig the concept of horse or Lindsay Lohan out of your long-term memory, your brain sort of primes the concept by loading it and related concepts into short-term memory, so your “rehab” neuron probably fires pretty soon after your Lindsay neuron.  Similarly, our translation program doesn’t keep its whole data-set in RAM constantly, but loads it from whatever our storage medium is, based on what’s connected to the currently loaded portion of the web.

Current MT programs don’t translate like humans do.  No matter what tricks or algorithms they use, it’s all based on manipulating sequences of letters and basically doing math based on a set of equivalences such as “caballo” = “horse”.  Whether they do statistical analysis on corpuses of previously-translated phrases and sentences, like Google Translate, to find the most likely translation, or a straightforward dictionary look-up one word at a time, they don’t understand what the text they are matching means in either language, and that’s why current approaches will never be able to compare to a reasonably competent human translator.
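
The dictionary look-up version really is about this shallow (toy dictionary, purely for illustration):

# Word-at-a-time "translation": nothing here knows what a horse is.
# It's pure symbol swapping, with word order and idiom left to luck.
EN_TO_ES = {"the": "el", "horse": "caballo", "eats": "come", "grass": "hierba"}

def translate(sentence):
    return " ".join(EN_TO_ES.get(word, word) for word in sentence.lower().split())

print(translate("The horse eats grass"))  # -> el caballo come hierba
print(translate("He has a horse-face"))   # -> he has a horse-face (no idea)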

It’s also why current “artificial intelligence” programs will never achieve true human-like general intelligence.  So, even your best current chatbot has to use tricks, like pretending to be a Ukrainian teenager with bad English skills on AIM, to pass the so-called Turing test.  A side-walk artist might draw a picture-perfect crevasse that seems to plunge deep into the Earth below your feet.  But no matter how real it looks, your elevation isn’t going to change.  A bird can’t nest in a picture of a tree, no matter how realistically depicted.

Calling what Google Translate, or any machine “translation” program, does translation has to be viewed in context, or else it’s quite misleading.  Language functions properly only in the proper context, and that’s something statistical approaches to machine translation will never be able to imitate, no matter how many billions they spend on hardware or algorithm development.  Could you eventually get them to where they can probably usually mostly communicate the gist of a short newspaper article?  Sure.  Will you be able to engage live in witty repartee with your mutually language-exclusive acquaintance over Skype?  Probably not.  Not with the kind of system we have now.

Though crude, our theoretical program with the knowledge web described above might take us a step closer, but even if we could perfect and polish it, we’re still a long way from truly useful translation or AI software.  After all, we don’t even understand how we do these things ourselves.  How could we create an artificial version when the natural one still eludes our grasp?

 


Magic and Science and How Twins are Different People

Something that in my experience drives many (identical) twins crazy is how many people assume they look alike physically so they must be just alike in other ways.  Interests, hobbies, sexuality, gender, religion, whatever.  Twins may look the same superficially, but underneath they are as different as any two other people.  Or any non-twin siblings if you want to be pedantic about nature and nurture.

Fantasy and Science Fiction are like the Twins of Literature.  Whenever someone tries to talk about genre lines or the difference between science and magic, the same old shit gets trotted out.  Clarke’s Law and all that.  Someone recently left a comment on this very blog saying magic is just a stand-in for science.  My friend!  Boy, do we have a lot to talk about today.  While it’s certainly true that magic can serve many of the same functions as science (or technology) in a story, the two are fundamentally different, both in themselves and in the uses to which they are most often put.  Sure, they’re both blonde, but technology is into red-heads, and magic is more into undercuts.

 

First, not to keep pushing the lie that science is cold and emotionless, but a prime use of science (not technology!) in literature is to influence the world through knowledge of the world’s own inner workings.  (Technology often does not require knowledge in its use, but rather only in its construction.)  One of the major differences is that most (but not all) magic in stories requires knowledge to use it.  You have to know how the magic works, or what the secret words are.  Whereas tech is like flipping the light switch.  A great writer once said that what makes it science fiction is that you can make the gadget and pass it to the average joe across the engineering bay and he can use it just fine, but magic requires a particular person.  I can pass out a million flame-throwers to the troops, but I can’t just pass you a fireball and expect you not to get burned.  That’s one aspect to look at, although these days magitech and enchanted objects can certainly play the role of mundane technology fairly well.

Second, magic is about taking our inner workings and thought processes and imposing them on top of the universe’s own rules.  From this angle, what makes magic distinct from technology is that a magic conflict is about the inner struggle and the themes of the narrative, and how they can be used to shape the world.  Certainly tech can play this role, twin to how magic can be made to act like tech.  But it’s much less common out in the real world of literature.

 

There are two kinds of magic system: one is the explicit explanation of how the magic works according to the word of god (the author), and the other is a system that the characters inside the world, with their incomplete knowledge, impose on top of the word-of-god system.  So this group uses gestures to cast spells, and this group reads a spellbook, but both are manifestations of the same basic energy.

So magic is the power to impose our will on the world whereas science/technology is powerful through its understanding of the uncaring laws of the universe.

Then, of course, there are the differences in terms of how authors use them in the narrative.  Magic has a closer connection, in my opinion, to the theme aspect of literature.  It can itself be a realization of the theme of a story.  Love conquers all, as in Lily Potter protecting her infant son from the dark lord at the cost of her life.  Passion reflected in the powers of the fire mage.  Elemental magic gives a great example.  Look at the various associations popular between elementalist characters and the elements they wield.  Cold and impersonal ice mages, loving and hippy-ish earth mages.  This analogical connection is much more difficult to achieve with technology.

 

There’s a lot of debate these days about “scientific” magic versus numinous magic, and whether or not magic must have rules or a system.  But even systematically designed magic is not the same as technology, though it can be made to play similar roles, such as solving a plot puzzle.  But think:  The tricks to magic puzzles are thematic or linguistic.  The Witch-king of Angmar is said to be undefeatable by any man.  The trick to his invulnerability is the ambiguity of the words of the prophecy.  One could argue that a woman is not a man, and therefore not restricted by the prophecy.  We have no idea how the “magic” behind the protection works on a theoretical basis.  Does it somehow check for Y-chromosomes?  But that’s not the point.  The thematic significance of the semantic ambiguity is more important.  In science fiction, it’s the underlying workings that matter.  Even if we don’t explain warp drive, there’s no theme or ambiguity involved.  It gets you there in such and such time and that’s it.  Or, in an STL universe, lightspeed is the limit and there’s no trick to get around it.

You can’t use science or technology the way Tolkien used that prophecy nearly as easily.  Imagine magic is a hammer and science is a sword.  Sure, I can put a nail in with the sword, but it’s a bitch and a half compared to just using a hammer.  And just because I can put in that nail with that sword doesn’t mean the sword is really a hammer.  Likewise, just because I can have magic that appears to follow a few discoverable and consistent rules to achieve varying but predictable effects doesn’t mean it’s the same thing as real-world science.  Maybe the moon always turns Allen into a werewolf on the 1st of the month, but I’ll be codgled if you can do the same thing with science.

Whether magic, science, or both are best suited to your story depends on your goals for that individual story.  Do you need magic or fantasy elements to really drive home your theme?  Do you need technology to get to the alien colony three stars down?  Magic can evaporate all the water in a six-mile radius without frying every living thing around.  Science sure as hell can’t, not even the most advanced far-future science we can currently conceive of.  They can both dry a cup, although we’re wondering why you’re wasting your cosmic talents when you could just use a damn paper towel.

Science can dress up as magic and fool your third-grade substitute teacher, and science can dress up as magic and fool the local yokels in 13th-century Germany.  But even if you put a wedding dress on a horse, it’s still a horse, and throwing hard-science trappings onto a magic system doesn’t change its nature.

 


June 2015 World-building Seminar: Technology and World-building

This is the third post in my mini-seminar on Technology in World-building.  This mini-series follows a slightly different format than the standard-form Monthly World-building Seminars.  So for this post, I’ll be covering how to match your world-building to your technology in terms of general concepts.

There are five things to consider when deciding what level of technology will reasonably support your narrative with the fewest plot-holes:

  1. You need to have a general idea of what is powering your technology.  Is it man-power?  Horse-power?  Do you have steam engines or electricity?  What natural resource is generating this energy?  You might have food for humans or animals, coal or gas for steam engines or internal combustion engines.  You could have nuclear power running a steam turbine to generate electricity.  Perhaps you have wind, geothermal, solar, tidal, or river power to run an electric generator.
  2. You need to know what support technologies and knowledge are required for your energy-production method.  Do you need metal-working?  What about physics?  Battery power requires knowledge of chemistry.  Electric power requires lots of advanced knowledge to become widespread.
  3. What other technologies are likely to have been discovered on the path to your most modern science, and are they obsolete, moribund, or still of practical value?  Most societies with war as a major component learned how to make edged weapons and armor.  Sword-forging techniques are incredibly innovative and complex.  When reliable firearms came along, however, they were soon obsolete.  Windmills and watermills were originally created to grind grain or pump water.  But even though we have more advanced methods for those tasks today, we were able to incorporate that knowledge into new technology: generating electric power.  With the advent of computers and electronic mail, stationery production is no longer strictly necessary.  Many people no longer rely on letters as a regular form of communication.  And yet some of the technology and systems involved still survive in diminished capacities.
  4. What technologies does the evidence suggest are developed alongside the technology you wish to use in your story?  Can or would a society evolve to ignore those technologies if they don’t work for your narrative structure?  Modern communication has invalidated many of our most popular narrative structures.  Yet many authors are so used to writing in those structures that they find increasingly creative and improbable ways to invalidate the new technology and thus create space for their narratives, rather than figuring out how to create tension by designing new structures that incorporate modern realities.  Or they don’t have the technology exist at all.
  5. Relatedly, can you somehow justify your narrative or society even when the necessary combination of technologies isn’t plausible based on your world-building or story events?  How many plot-holes can your story support if you can’t quite make things fit how you’d need them to?

In its simplest usage, technology is the list of possibilities available to you as an author to move your narrative along.  But it’s also a set of restrictions on what you can realistically accomplish within your narrative.

If you have cell phones, then it’s a lot harder to plot a story where key pieces of information are kept from various characters through narrative shenanigans.  Really, any sort of spycraft story is gonna be very different with long-distance communication.  You can text pictures of important documents, for example, instead of having to break back out of the evil fortress.  Sneaking around is a lot harder with cameras and heat sensors.  It was relatively easy to assume a false identity in the Middle Ages.  How are they going to fact-check you when you’re from a thousand miles away with shitty roads and a bandit problem?  In the modern world, it’s a lot harder to get along as a fake person, despite what you may see in the movies.  Communications technology has a huge impact on a society and its national identity, and we’ll be covering those effects in-depth in a later seminar.

Cheap, high-quality steel can lead to very advanced swords, way better than were available back in the day.  But!  They’ve probably been outmoded by guns by the time such steel is available.  Still, the implications for national security and expansion potential, given a certain level of military tech relative to neighboring lands, should be pretty obvious.

And all this is ignoring the possibility of magic mimicking advanced technology in a fantasy story.

 

It’s also important to keep in mind that available technology is going to have a strong influence on how your society is structured.  Your population density for cities increases sharply with easy transportation or aqueducts to supply water.  Concrete makes for cheap, durable housing for the masses.  Good building insulation increases the number of climates humans can have an advanced civilization in.  Road-building and ship-building technology mean increased trade, which leads to more wealth.  They also make it easier to hold together a large political region, because it’s easier for the ruler to communicate and enforce his will.

Medicine is also important.  Good medical technology lets people live longer, take greater risks, and increases the population.  People aren’t dying all the time from plagues.  This can lead to more skilled workers, give a genius more time to pursue knowledge, and increase the amount of wealth held by individuals or families.

 

It’s impossible in a single post, or even a series of posts, to cover every major question about how technology will influence the world and any narratives that occur in it.  Different environments or political situations will affect how important a given aspect of technology is.  Some paths of development on Earth are just accidents of history and can be deviated from.  But it’s still an important thing to consider.  In the fourth and final post in this series (for now), I’m going to examine Jurassic World and the ways in which technology both makes the story possible and leaves some pretty gaping plot holes.

 


Monthly World-building Seminar: Fantasy, Technology, and Occupation

Welcome back to Atsiko’s Monthly World-building Seminar!  Today, we’re going to be talking again about technology and its influence on society.  In fantasy literature.

In the spirit of my official seminar schedule, I wanted to talk about how the ideas in the previous post can be applied to the real world.

According to the MIT Technology Review and the National Bureau of Economic Research, income and wealth inequality are directly related to the way an increasingly technological culture advantages the technologically literate over every other group.  Which only makes sense.  However, it’s important to keep in mind that those most likely to be technologically literate are people who are already advantaged in a given economy/society.  And when new technology comes around, they are also the most likely to have the time and resources available to capitalize on its possibilities.  This has been true throughout history.  When a group of people has a smaller share of inherited wealth, and a technology comes along that eliminates or drastically changes their field of employment, they have less chance of recovering and finding a place in the new incarnation of society.

The roots of this problem go back to the concept of specialization of labor: as a society grows larger and more organized, individuals in the society begin to switch from being jacks of all trades to being masters of one.  With the advent of agriculture, the job of producing food could be handled by a smaller percentage of the population, leaving the rest of the society free to pursue other trades.  This leads to the development of trades, where the master passes on their skills and knowledge to the apprentice, and these traditions result in an increasing level of quality in those areas.  As trades become more specialized, it becomes increasingly hard for an individual to learn a new trade, whether by themselves or from a teacher.  Finally, trades become specialized enough, and important enough to the economy, that they begin to gain prestige; the supply of those skilled in the trade is exceeded by the demand for their services, and the trades become economically more stable and lucrative.

 

And some day, a new technology is invented which makes some trade obsolete.  Or perhaps a step in the production of a good that was once important and labor-intensive becomes simpler and easier, and the need for the people who specialized in that step begins to dry up.  Now they must either invest in learning a new skill or be relegated to the pool of laborers suited only for easy tasks which are little trouble to learn.  The supply of such workers approaches or even exceeds the demand, and the individual faces a decrease in their standard of living.

Not every technology has this effect, but the more innovative and powerful the technology, the more likely it is.  And while new technologies may create new jobs and trades, those jobs tend to be more specialized, and there tend to be fewer of them than a job-killing technology destroys.

 

The next step in the association between technology and inequality involves the way technology breeds complexity in a society.  As more technologies arise, and the limited population divides further between career paths, it becomes necessary to have more effective organization.  Now jobs in organization begin to diversify and increase in complexity.  Because some human beings are better at certain tasks than others, and because there are a limited number of slots in most fields, especially as increasing levels of technology make tasks in those fields more efficient, the ease of switching between careers decreases.  Even mid-level careers go through this, such that high-level and mid-level careers both drop to low-level when their relevance is eliminated by technology.

Finally, as technology and organization increase in complexity and in the investment of time and knowledge they demand, there arises a class of job that only a tiny number of people are suited for, but for which workers are in high demand.  You can never eliminate all the low-level jobs; human society will always have the equivalent of today’s “minimum wage” job, at least as far as fantasy is concerned, no matter what the level of technology or magic involved.  Supply and demand will always hold, and so the greater the level of technology, the wider the income gap is going to be, no matter how much that same technology may raise the floor.  And as the rise in demand for certain skills convinces more people to aim for those career tracks, the people in the middle shift more towards the top, and the economy sees more profit in those areas, increasing demand and also increasing inequality.  Those who would previously have been in the mid-level fields are now instead either hitting the tail-end of the upper level, or failing out and losing their investment, dropping into the low-level fields.  You can read a bit more about the ideas of capital bias and skill bias here.
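That core claim, that technology can raise the wage floor while still widening the gap, is easy to illustrate with a toy calculation.  Here’s a minimal Python sketch; every wage number in it is invented purely for demonstration (not drawn from the MITTR or NBER material).  It computes the Gini coefficient, a standard 0-to-1 inequality measure, for a made-up pre-tech economy with a broad middle tier and a post-tech economy where the middle has hollowed out toward the extremes:

def gini(wages):
    # Gini coefficient: 0 = perfect equality, 1 = maximal inequality.
    # Standard formula over the sorted wage list.
    w = sorted(wages)
    n = len(w)
    total = sum(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * total) - (n + 1) / n

# Pre-tech toy economy: a broad middle tier (wages in arbitrary units).
before = [20] * 30 + [50] * 50 + [120] * 20

# Post-tech toy economy: the middle tier hollows out toward the extremes,
# and even the bottom tier's wage floor rises a little (20 -> 25).
after = [25] * 55 + [60] * 15 + [200] * 30

print(f"Gini before: {gini(before):.3f}")  # ~0.318
print(f"Gini after:  {gini(after):.3f}")   # ~0.460

Even though the lowest wage rises from 20 to 25 in this toy model, the Gini coefficient climbs from about 0.32 to 0.46: the floor goes up and the gap widens anyway, which is exactly the dynamic described above.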

 

The easiest example of this today is Silicon Valley.  As noted in the MITTR article, Silicon Valley is one of the areas of highest economic inequality in the US, being one of the centers of technological innovation and change.  But basically any American corporation can model these concepts.  Certain high-demand fields, such as CEO and other management jobs, are paid not according to merit, but rather based on competition between companies for a scarce commodity.

 

So technology leads both to a massive increase in the overall productivity of a society and to larger and larger amounts of inequality.  This is something that’s important to keep in mind when building your own fictional society, and it can lend a lot of verisimilitude to your world-building.  And in the third and final post of this introductory mini-seminar, I’ll go into more detail about both realistic and reasonable applications of these concepts to fictional worlds.

 
