
Tag Archives: Science Fiction

Crisis Parks: Cultural and Financial Investments in the Future

Welcome to the first post in my Speculative Societies Column.

Today I want to look at a social science fiction solution to a wide range of social and environmental problems we’re currently experiencing in societies across the globe. You may or may not be aware that I’m an American. Given that, I’ll be writing from an American perspective and using American politics, economics, and geography to craft this thought experiment. It could certainly be adapted to a story with any setting, but the particulars are going to change quite a bit depending on available unused land, geopolitical realities, etc.

The social and environmental problems we’ll be addressing with this hypothetical social and economic policy are immigration and migration, homelessness, the increasing frequency and severity of natural disasters, and the worker and skilled worker shortage in the United States as of 2022. You might wonder what could possibly address all of these problems in a single package. When I mentioned in my intro post that these are social science fiction and somewhat larger-than-life solutions/responses to societal issues, I was gently suggesting that they are unlikely to ever be made reality. But they are still plausible from an abstract practical perspective.

So, what is a Crisis Park? America has an enormous number of parks and other sites concerning national and natural history: areas of land set apart for public use for recreational, educational, and cultural purposes. A Crisis Park, then, is an area of land, necessarily more developed than a park like Yellowstone or Crater Lake, set aside for the purpose of crisis management and human displacement.

Let’s theorize six regional divisions of the United States. Other countries could have fewer or more, and like our six US divisions, the exact boundaries would be determined by human geography and natural impediments. Each one cooperates among its member states, or is administered by the Federal Government. It might be most interesting to propose that a Federal Executive Department runs it: the Department of the Interior, perhaps, or the Department of Housing and Urban Development. Maybe we give it to FEMA. Or perhaps we propose our own new Department of Crisis Management, or some such.

Whatever the organization, they run our six Crisis Parks as hubs for various services, social and emergency. Our Parks would have temporary and transitional housing, say for a million people. They would have medical services, entertainment venues, shopping districts, schools. They would also have command centers for coordinating natural disaster response and relief, including local and regional police and fire departments in the affected areas, as well as their own reserve units. Perhaps the Western Section would have a heavy emphasis on forest fire response. The Southeastern Section might be tuned for hurricane and flood relief. The Great Plains Section might be particularly skilled with tornado recovery.

I think it’s pretty self-explanatory for natural disaster response. But how does it address immigration and homelessness? Well, it provides central processing locations for refugees and migrants. It has built-in housing in the form of single-family homes and apartment blocks. We aren’t shoving people in the Superdome here, or putting kids in fenced enclosures in abandoned shopping malls. There are government-run medical services, which can double as teaching hospitals, providing a baseline standard of training for private hospitals and universities to compare themselves against.

You need staff to run these facilities. They create jobs for relocated homeless populations and refugees. They can offer certification in various trades required for their own maintenance. Not only do they create permanent jobs, they offer low-cost, affordable training for important fields which may be lacking enough skilled workers. Unlike private hospitals or plumbing firms or whatever, they can afford to invest in on-the-job training, don’t suffer from over-staffing when trainees come in, etc. Former employees can easily parlay their skills into small businesses or jobs in the private sector.

Speaking of the private sector, although these facilities have public concessions and other shopping services, they can sell vendor contracts to staff and operate their entertainment venues, offering gigs and performances and grants for up-and-coming artists and entertainers. Local schools could use their sporting facilities, or contract with the federal government for trade school programs using the facilities.

If you think this sounds a bit like utopian socialism, you’re right on the money. The reality would obviously be more complex. But through public-private partnerships, these facilities could pay for a large portion of their own expenses. They could be part of larger networks providing affordable services to low-income families or for people on Medicaid and Medicare. They could help new citizens or refugees acclimate to American society and culture without being completely focused on where their next meal is coming from. Public universities based here could offer affordable college and learning opportunities to birthright and naturalized US citizens.

In a country where private enterprise is fetishized and government programs have a bad name, the Crisis Parks run by the Department of Crisis Management could provide a basis of comparison against which citizens could measure the ethics, economics, and effectiveness of private companies, without eliminating private enterprise or leaving the government as the only option.

Now imagine a story in a secondary world setting where a government runs crisis parks or an equivalent service. Look at how I’ve connected the goals of and barriers to the crisis park concept to American politics and geography. You want to have at least that much interaction between your speculative fiction premise and the world in which it resides. What would it suggest to you that our secondary world crisis parks are concerned with not stepping too hard on the toes of private businesses? What does it say that large dedicated facilities are considered necessary to deal with population changes due to mass migration? What relationship would a secondary world nation implementing crisis parks likely have with its natural environment?

All of the above are useful angles from which to consider a social science fiction premise. And you can inspire very cool story conflicts even without fancy tech or magic. Just imagine the changes in public transportation, highways, planes, and railroads required to support new centrally located population centers capable of housing up to two million people. How would this affect nearby cities? Local businesses?

There is of course more to this concept than described here. Details on various individual services and how they might work. Like any good story idea, it’s too large to fit in a single blog post, but hopefully there was enough of a picture to make my argument. Feel free to ask for details in the comments.

 


The True Cost of Science

Following up on my last post linked to at the bottom of the page, today I’m gonna talk about the issue of requiring a “cost” for magic, and the hidden costs of technology.  I’m sure you know a bit about that second part in the real world, but I want to address it from both narrative and world-building perspectives.

https://twitter.com/Merc_Rustad/status/1023246501143883777

Again, not an attack on the opinions of this panel.  But, the “personal” cost of magic vs. the hidden cost of science is sorta the topic, and this tweet did inspire it.

The main reason that the cost of magic tends to be a personal one is because the function of magic so often tends to be to side-step the infrastructure so indispensable to science and technology.  When we use technology to solve a problem in a story, the world-building and pre-work that supports the tech is so often already implied and accounted for.  Sure, it costs me nothing to dial your cell phone.  But somebody had to invent the tech, build the cell towers, provide the electricity, drill for the oil to make the plastic, mine the gold and copper and process the silicon, etc.  And all of that took thousands of years of set-up on the part of millions if not billions of people from all over the world.

Whereas, if I telepath you in Fantasy Capital City #11 from Frozen Northern Fortress #2490, none of that work was required. At most, maybe there was a breeding program or a magical experiment. Maybe a few years of training me. But you’re still short-cutting uncountable hours of effort that were required for me to text you on Earth. And some magic is vastly more powerful on a per-second basis than telepathy. That is, its effect on the physical world is enormous in comparison to me pathing you about the cute boy at the inn.

That’s why many people want magic to have a price. Usually it’s a personal price, because there isn’t the societal infrastructure around to displace that cost to the ancestors or, as Merc so sharply notes above, the environment. The cost is personal because there’s no structure to allow for other options. And also because it plays powerfully into the themes of many fantasy works. Is the requirement that there even be a cost puritanical? That depends, I guess. Certainly a YA protag whose mom pays the phone bill isn’t expending any more personal effort to make a phone call.

But then, the requirement of all that infrastructure vastly limits what you can do with tech. Whereas magic can not only do enormous stuff for seemingly no effort, but it can do things that normally would be considered impossible. Such as throw pure fire at someone. If Lvl. 3 Fireball is functionally equivalent to a grenade, does that negate the need for a cost to the spell? Well, can I cast infinite Fireballs where I might only be able to carry six grenades? Then maybe not. Even if I have 20 incredibly advanced, complex tools that are carry-able on a tool belt or in a small backpack, I probably still can’t do even a hundredth of what a mediocre hedge mage in some settings can do with zero tools.

If I feel like the character can do literally anything with magic without having to do much prep beforehand, and without the labor of millennia of civilization to back them up, it might take some of the tension out of the story. Can you substitute unbreakable rules to get around that freedom? Certainly. And most systems with a cost do. But that can still leave a lot of freedom to avoid the hard work it would otherwise take to get around a plot obstacle.

And finally, we have to look at the other obvious reason for putting a cost on magic, even if it’s only eventual exhaustion. Every other thing we do or could do in a given situation in the real world has a personal cost. It might be immediate, like physical exhaustion. Or it might be more distant, like having our phone shut off for not paying the bill. So, if magic has no such cost, or physical/economic limit, you have to wonder what the point of doing anything the normal way would be. And if you don’t ever have to do anything the normal way, it’s unlikely your culture and society would match so closely to societies whose entire reason for being the way they are is based on the limitations of “the normal way”.

So, in the end, it’s not that all magic must have a personal cost, and tech can’t.  It’s more that the way magic is used in most fantasy stories means that the easiest or almost only place the cost can fall is on the shoulders of the character.

But there are other ways to do it. Environmental ones, for example. The cataclysmic mage storms of Mercedes Lackey. The brambles of Bacigalupi and Buckell’s The Alchemist and The Executioness. Or, for example, perhaps the power for magic comes from living things. A mage might draw her power from a far distant tree. Might kill an entire forest at no cost to herself. Might collapse an empire by sucking dry its rivers and its wombs with her spells. And at no cost except, of course, the enmity of those she robs of life, or of the neighbors who blame her for the similar catastrophe wrought upon them by her unknown colleague to the west. Perhaps she crumbles buildings by drawing on the power of “order” stored within their interlocking bricks. Or maybe the radiation by-products from the spell energy pollute the soil and the stones, leading to horrific mutations of wildlife that scour the countryside and poison the serfs with their own grain. Or maybe, just maybe, the magic cracks the foundation of the heavens with its malignant vibrations and brings the angels toppling down like iron statues and through the crust of the world into hell.

So, as I’ve said before, it’s consequences to the actions of the characters that people want. And often the easiest or most simplistic costs are personal ones. But certainly, you could apply environmental costs. Or narrative costs paid to other characters who don’t much care for the selfish mage’s behavior. Or metaphysical costs to the order of the world or the purity of its souls. Those costs are easily addressed and provided for when they mirror the costs familiar to us from our own use of technology. But sometimes, when we’re straying far from the realms of earthly happenings, interesting and appropriate costs become harder to work into the story in a way that doesn’t disrupt its progression.

Sure, the choice of a personal cost could be puritanical.  Or it could be efficient.  Or lazy.  But that’s not a flaw of our conception of magic; rather, it’s a flaw in the imagination of the individual author, and the sum of the flaws of all authors as a whole.

I’d love to see some magic systems that lack a direct personal cost like years off your life, or the blood of your newborn brother. And while we’re at it, give me some science fiction choices with personal costs. Technology in our world certainly isn’t consequence-free; just ask Marie Curie. Anyone up for the challenge?

 

 


Magic vs. Science; Function vs. Presentation

A. Merc Rustad recently live-tweeted a panel from Diversicon called “Magic: Science or Witchcraft”: https://twitter.com/Merc_Rustad/status/1023252105627357184

 

And I’d like to expand a bit on this topic.  One of the issues apparently brought up during the panel was that science fiction has magic in it.  That is, FTL travel, say, or other “futuristic” technologies function like magic, despite being clothed in SF trappings.

However, I think this is a flawed argument.  Which brings me back to the title of this post.  Science and magic are often presented as diametrically opposed.  But that’s a bit of a simplification.  Some people might argue with some merit that science and magic in fiction are merely collections of tropes, and as you modify the collections to bring them closer in line with each other, the line between science and magic begins to blur.

But there are two axes to the distinction that can make this a much more precise discussion. First, function: if the functions of magic and science are identical, are the two concepts really that different? Second, presentation: if I present a scientific concept as magic, cuddling up to Clarke’s Third Law, is it a distinction without a difference?

The problem with these discussions is that the conclusions really depend on how magic or science is used in a given narrative or set of narratives.  If I present you a magic system, and it looks and feels an awful lot like science, in that we have repeatable results to identical actions, and you can logically manipulate the rules to achieve effects that follow directly from those manipulations, is it magic or is it science?  Well, I might use tropes around this system that relate to science, and therefore you might argue it’s science.  But if I use tropes related to magic, does that mean it is magic?

What if I present you with a system that I treat as scientific but it doesn’t have direct parallels to earth sciences?  Can we really call that science when the common conceit of science fiction is that the science follows logically from an extrapolation of real scientific principles found in our world?  Or are all systems that incorporate some or entirely otherworldly principles and logic by definition magic?

Many people have argued that magic is magic precisely because it doesn’t follow a logical system of rules, and especially not rules known to the reader or that can have experimentally repeatable results.  Certainly you can take that approach to magic.  Although then one has to wonder how anyone can achieve anything useful narratively with it.

Plus, I think it would be really cool to see more unearthly sciences in fiction, so I don’t want everything that can’t be rigorously extrapolated from “real” science to be declared magic.

And our last major question, why does it matter?  Well, for one thing, because the genres are marketed to different people, and so someone or a large group of someones might be very grumpy to receive a “science fiction” novel and then find it fits much more closely with their conception of a “fantasy” novel.  And that’s bad for marketing and sales.  People are and should be allowed to be deeply invested in the trappings of various genres, and so we need words to categorize and discuss those trappings in a way that results in people being able to know whether a given story will appeal to their interests.

So, going back to my argument that it’s flawed to say “SF” includes “magic” because FTL travel isn’t yet possible: my point is not that that perspective can’t be useful in discussing how to construct and analyze speculative fiction to help readers find books and help authors find readers. Rather, regardless of whether FTL travel is any more likely to exist than fictional magic systems, it belongs squarely in the genre of science fiction if that’s where the author wants to place it.

Certainly you could have a fantasy novel whose conceit is that a mad magician created a device that transported his entire planet into another solar system and thus brought its inhabitants into conflict with the inhabitants of a native planet, and started a war fought on great short-range mythril-keeled metal warships that sail between worlds. And for all intents and purposes, that device is a planetary hyperdrive. But I think you’d have trouble marketing that as a purely science fiction or even space opera novel. You might, with some effort, succeed in marketing it as that rickety sub-genre “sword and planet”. It sounds like it would be a really fucking cool book. Maybe Spelljammer RPG enthusiasts would buy it by the boatload. Who knows. But even though it has hyperdrive, it’s probably not viable as sci-fi in the modern market, nor would it be scientifically plausible given real world science. Imagine trying to do the gravity and orbital calculations for the star-galleys or whatever.

If you’re unlikely to ever find hyperdrive in a fantasy novel, is there any value in arguing that it’s technically magic? This post isn’t in any way intended as an attack on the panelists from the twitter thread or their personal views. I just found some of the comments useful jumping-off points for things I’ve been trying to express for a while.

 

Look forward to a follow-up post in a few days on the issue of “cost” of magic vs. the cost of science.  Both in terms of what it requires from the structure of a society, and why the emphasis on “cost” in the first place.

 


Why Is A Picture Worth a Thousand Words? Information Density in Various Media

You’ve obviously heard the phrase “a picture is worth a thousand words”, and you probably even have an idea why we say that. But rarely do people delve deeply into the underlying reasons for this truth. And those reasons can be incredibly useful to know. They can tell you a lot about why we communicate the way we do, how art works, and why it’s so damn hard to get a decent novel adaptation into theaters.

I’m going to be focusing mostly on that last complaint in this post, but what I’m talking about has all sorts of broad applications to things like good communication at work, how to tell a good story or joke, and how to function best in society.

So, there are always complaints about how the book, or the comic book, or whatever the original was, is better than the movie. Or the other way around. And that’s because different artistic media have different strengths in terms of how they convey information. There are two reasons for this:

  1. Humans have five “senses”. Basically, there are five paths through which we receive information from the world outside our heads. The most obvious one is sight, closely followed by sound. Arguably, touch (which really involves multiple sub-senses, like heat and cold and pain) is the third most important sense, and, in general, taste and smell are battling it out for fourth place. This is an issue of “kind”.
  2. The second reason has to do with what I’m calling information density. Basically, how much information a sense can transmit to our brains in how much time. This is an issue of “degree”. Sight, at least for humans, probably has the highest information density. It gives us the most information per unit of time.

So how does that affect the strengths of various media? After all, both movies and text mostly enter our brain through sight. You see what’s on the screen and what’s on the page. And neither can directly transmit information about touch, smell, or taste.

The difference is in information density. Movies can transmit visual (and audio) information directly to our brains. But text has to be converted into visual imagery in the brain, and it also takes a lot of text to convey a single piece of visual information.

AI, in the form of image recognition software, is famously bad at captioning photos.  Not only does it do a crappy job of recognizing what is in a picture, but it does a crappy job of summarizing it in text.  But really, could a human do any better?  Sure, you are way better than a computer at recognizing a dog.  But what about captioning?  It takes you milliseconds at most to see a dog in the picture and figure out it is jumping to catch the frisbee.  You know that it’s a black lab, and that it’s in the woods, probably around 4 in the afternoon, and that it’s fall because there’s no leaves on the trees, and it must have rained because there are puddles everywhere, and that…

And now you’ve just spent several seconds at least reading my haphazard description. A picture is worth a thousand words because it takes a relatively longer amount of time for me to portray the same information in a text description. In fact, it’s probably impossible for me to convey all the same information in text. Just imagine trying to write out every single bit of information explicitly shown in a half-hour cartoon show in text. It would probably take several novels’ worth of words, and take maybe even days to read. No one would read that book. But we have no problem watching TV shows and movies.
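The time gap behind the cliché is easy to sketch with some back-of-envelope arithmetic. The figures here are rough assumptions for illustration, not measured values:

```python
# Back-of-envelope: time to absorb "a picture" vs. time to read 1,000 words.
# Both constants are ballpark assumptions, not measured data.

WORDS_PER_MINUTE = 250    # assumed typical adult silent-reading speed
IMAGE_GIST_SECONDS = 0.1  # assumed ~100 ms to grasp the gist of a scene

def reading_time_seconds(words: int, wpm: int = WORDS_PER_MINUTE) -> float:
    """Seconds needed to read `words` words at `wpm` words per minute."""
    return words / wpm * 60

thousand_words = reading_time_seconds(1000)       # time to read 1,000 words
ratio = thousand_words / IMAGE_GIST_SECONDS       # how much faster the glance is

print(f"Reading 1,000 words: {thousand_words:.0f} s")  # 240 s, i.e. ~4 minutes
print(f"Glancing at a picture: {IMAGE_GIST_SECONDS} s")
print(f"The picture is roughly {ratio:.0f}x faster")
```

Even with generous assumptions for the reader, the glance beats the thousand words by three orders of magnitude, which is the whole point of the saying.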

Now go back and imagine our poor AI program trying to figure out the important information in the photo of the dog and how to best express it in words. Yikes. But as a human, you might pretty quickly decide that “a dog catches a frisbee” adequately describes the image. Still takes longer than just seeing a picture, but isn’t all that much time or effort. But, you’re summarizing. A picture cannot summarize and really has no reason to. With text (words) you have to summarize. There’s pretty much no way around it. So you lose an enormous amount of detail.

So, movies can’t summarize, and books must summarize. Those are two pretty different constraints on the media in question. Now, imagine a radio play. It’s possible you’ve never heard one. It’s not the same as an audiobook, despite communicating through the same sense (audio), and it has some serious advantages over books and audiobooks. You don’t have to worry about conveying dialogue or sound information because you can do that directly. Emotion, accents, sound effects. But of course you can’t convey visual information like a movie, and unlike in a book or an audiobook, it’s a lot more difficult to just summarize, because you’d have to have a narrator or have the characters include it in dialogue. So raw text still has some serious advantages based on the conventions of the form. Similarly, radio dramas/audio plays/podcasts and movies both have to break convention to include character thoughts in storytelling, while books don’t.

So, audio and television media have major advantages over text in their specific areas, but text is in general far more flexible in making up for any shortcomings. And it can take advantage of the summary nature of the medium when there’s a lot of unnecessary information. Plus, it can count on the reader to be used to filling in details with their imagination.

Film and radio can’t do that.  They can use montages, cuts, and voiceovers to try and imitate what text can do, but it’s never quite the same effect.  And while language might not limit your ability to understand or experience concepts you have no words for, the chosen medium absolutely influences how effective various story-telling techniques can be.

Consider: an enormous battle scene with lots of action is almost always going to be “better” in a visual medium, because most of the relevant information is audio and visual information. An action scene involving riding a dragon through an avalanche while multiple other people try to get out of the way or stop you involves a great deal of visual information, such that a text can’t convey everything a movie could. Watching a tennis match is always going to be more exciting than reading about one, because seeing the events lets you decide without narrator interference whether a player has a real shot at making a return off that amazing serve. You can look at the ball, and using past experience, imagine yourself in the player’s place and get a feeling of just how impressive that lunging backhand really was. You can’t do the same in text, because even if the writer could describe all the relevant information such that you could imagine the scene exactly in your head, doing so would kill the pacing because of how long reading that whole description would take.

The very best artists in any medium are always going to use that medium to its fullest, exploiting any tricks or hacks as best as possible to make their creation shine.  And that means they will (often unconsciously) create a story tailored to best take advantage of the medium they are working in.  If and when the time comes to change mediums, a lot of what really made the art work won’t be directly translatable because that other medium will have different strengths and have different “hacks” available to try to imitate actually experiencing events directly.  If you play videogames or make software, it’s sort of like how switching platforms or programming languages (porting the game) means some things that worked really well in the original game won’t work in the ported version, because the shortcut in the original programming language doesn’t exist in the new one.

So, if video media have such a drastically higher information density than text, how do really good authors get around these inherent shortcomings to write a book, say?  It’s all about understanding audience attention.  Say it again, “audience attention.”

While the ways you manipulate it are different in different media, the concept exists in all of them in some form.  The most obvious form is “perspective”, or the viewpoint from which the audience perceives the action.  In film, this generally refers to the camera, but there’s still the layer of who in the story the audience is watching.  Are we following the villain or the hero?  The criminal or the detective?

In film, the creator has the ability to include important visual information in a shot that’s actually focused on something else. Because there’s no particular emphasis on a given object or person being included in the shot, things can easily be hidden in plain sight. But in a book, where the author is obviously very carefully choosing what to include in the description in order to control pacing and be efficient with their description, it’s a lot harder to hide something that way. “Chekhov’s gun” is the principle that irrelevant information should not be included in the story. “If there’s a rifle hanging on the wall in Act 1, it must be fired in Act 2 or 3.” Readers will automatically pay attention to almost anything the author mentions, because why mention it if it’s not relevant?

In a movie, on the other hand, there’s lots of visual and auditory filler, because the conceit is that the audience is directly watching events as they actually happened, so a living room with no furniture would seem very odd, even if the cheap Walmart end table plays no significant role in the story. Thus, the viewer isn’t paying particular attention to anything in the shot if the camera isn’t explicitly drawing their eye to it. The hangar at the Rebel Base has to be full of fairly detailed fighter ships even if we only really care about the hero’s. But no novel is going to go in-depth in its description of 30 X-wings that have no real individual bearing on the course of events. They might say as little as “He slipped past the thirty other fighters in the hangar to get to the cockpit where he’d hidden the explosives.” Maybe they won’t even specify a number.

So whereas a movie has an easy time hiding clues, a writer has to straddle the line between giving away the plot twist in the first 5 pages and making it seem like a deus ex machina that comes out of nowhere.  But hey, at least your production values for non-cheesy backgrounds and sets are next to nothing!  Silver linings.

To get back to the main point, the strengths of the medium to a greater or lesser extent decide what kind of stories can be best told, and so a gimmick that works well in a novel won’t necessarily work well in a movie.  The narrator who’s secretly a woman or black, or an alien.  Those are pretty simplistic examples, but hopefully they get the point across.

In the second part of this post a couple days from now, I’ll be talking about how what we learned here can help us understand both how to create a more vibrant image in the reader’s head, and why no amount of research is going to allow you to write about a place or culture or subject you haven’t really lived with for most of your life like someone born to it would.

 


YA and SFF: The Good Twin and the Bad Twin

So as I was scrolling through my Twitter feed today, I ran across a link to this article by Fonda Lee: The Case for YA Science Fiction.  Read the post before you continue.  I’ll wait…

Okay.  So, the gist of the post is that YA Fantasy novels have been selling like crazy.  There are several big name authors, including those mentioned in Lee’s post and many others.  I can tell you right now I’ve read most of the books put out by all of those authors in the YA Fantasy genre.  And so have millions of others.  They may not be as popular as dystopians, and they certainly don’t get as many movie deals.  But they move a lot of dead trees and digital trees.  I’ve been blogging and writing long enough to remember four or five rounds of “Will Science Fiction be the next big thing in YA?”  And the answer was always no.  There would be upticks and uptrends.  Several fantastic books would come out in a short period.  But nothing would ever really break into the big money or sales the way YA Fantasy often does.  It wouldn’t be blasted all over the blogosphere, or the writers forums, or the tip top of the best sellers lists.  Which is too bad, because science fiction has a lot of value to add to YA as a category, and it can address issues and do so in ways not available to other genres.

Lee mentions several notable YA SF novels that take on current events and other contemporary issues that are ripe for exploration: MT Anderson’s Feed is a fantastic look at the way social media has been taken over by advertisers looking to build monetizable consumer profiles, and the ending, without spoilers, takes a look at just how far they go in valuing those profiles over the actual humans behind them.  She mentions House of the Scorpion, which I didn’t care for, but which is still a very good novel on the subject of cloning.  Scott Westerfeld never gets credit for his amazing additions to the YA SF canon, with the steampunk Leviathan series and the dystopian Uglies series.

YA SF has a lot of unmined treasure to be found, and maybe it will have to focus a bit on near-future SF for a while, to whet the appetite of YA readers.  Some of the hard SF tropes Lee discusses in her post kinda bore me, honestly.  And as a writer I feel like saying "it's magic" is popular because it's simpler.  There's always a huge debate in adult SFF about whether the worldbuilding or science details really add enough to the story compared to the narrative effects of the speculative elements.  The social issues we are having as a world today are incredibly accessible fruit for a YA SF novel to harvest.  Social media, AI/big data, consumer profiles, technology in education.

I mean, I know 8-year-olds whose schools give out tablets to every student to take advantage of what tech in the classroom can offer.  My high school was getting SmartBoards in every classroom just a year after I left in the late 2000s.  But you never see any of this in YA books.  They often feel set no later than my sophomore year of high school given the technology and social issues involved.  Being a teenager will always be being a teenager, but the 80s and early 90s are waaaaaaaaaaaaayyy different than what young adults encounter in their general environment today.  Of course, to be SF you can’t just upgrade the setting to the present day.

You have to extrapolate out quite a bit further than that.  But given the environment today's teens are living in, doing so while keeping the story interesting and relatable is so easy.  What's the next big advance in social media?  How will smart houses/the internet of things impact the lives of young adults for better or worse?  How will the focus of education change as more and more things that you used to have to do in your head or learn by rote are made trivial by computers?  What social or political trends are emerging that might have big consequences in the lives of future teenagers?  How could an author explore those more intensely with elements of science fiction than they could with a contemporary novel?

I definitely share Lee's sense that YA "science fiction" grabs trappings to stand out from the crowd rather than being rooted inherently in the tropes of the genre.  It's not uncommon for YA in general to play this game with various genre outfits, but sci-fi often seems the hardest hit.  That's not a criticism of those books, just a note that it might give readers, writers, and publishers a false image of what SF really is and how YA can benefit from incorporating more of it.

As a reader, I've always dabbled in both the YA and Adult book cases.  And from that perspective, I wonder if the flavor of much YA SF might be telling SF readers, teenaged or otherwise, that it's just not the book(s) for them.

As a writer, I have lots of novel ideas that are YA and SF, and I'd like to explore them, and maybe even publish some of them one day.  But I do have to wonder, given the wide variety of stories building in my head, am I taking a risk with my career by writing in such a threadbare genre?  Perhaps others with similar plot ideas feel the same, and that's why they aren't submitting these ideas (books) to publishers?

 


Do Androids Dream?

I'm here with some fascinating news, guys.  Philip K. Dick may have been joking with the title of his famous novel Do Androids Dream of Electric Sheep?  But science has recently answered this deep philosophical question for us.  In the affirmative.  The fabulous Janelle Shane trains neural networks on image recognition datasets with the goal of uncovering some incidental humour.  She's taken this opportunity to answer a long-standing question in AI.  As it turns out, artificial neural networks do indeed dream of digital sheep.  Whether androids will too is a bit more difficult to say.  I'd hope we would improve our AI software a bit more before we start trying to create artificial humans.

As Shane explains in the above blog post, the neural network was trained on thousands or even millions (or more) of images, which were pre-tagged by humans for important features.  In this case, lush green fields and rocky mountains.  Also, sheep and goats.  After training, she tested it on images with and without sheep, and it turns out it's surprisingly easy to confuse it.  It assumed sheep where there were none and missed sheep (and goats) staring it right in the face.  In the second case, it identified them as various other animals based on the other tags attached to images of them.  Dogs in your arms, birds in a tree, cats in the kitchen.

This is where Shane and I come to a disagreement.  She suggests that the confusion is the result of insufficient context clues in the images.  That is, fur-like texture plus a tree makes a bird; with a leash, it makes a dog; in a field, a sheep.  The network sees a field and expects sheep.  If there's an over-abundance of sheep in the fields of the training data, it starts to expect sheep in all the fields.
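Shane's context-prior explanation can be sketched with a toy example.  The tags and counts below are entirely invented for illustration; the point is just the mechanism: if most "field" images in the training data also contain sheep, then tag co-occurrence alone pushes a classifier toward predicting sheep in every field it sees.

```python
from collections import Counter

# Hypothetical tagged training set: each image reduced to its human tags.
training_images = [
    {"field", "sheep"}, {"field", "sheep"}, {"field", "sheep"},
    {"field"}, {"tree", "bird"}, {"kitchen", "cat"},
]

# Count how often "sheep" co-occurs with every other tag.
context_counts = Counter()
sheep_counts = Counter()
for tags in training_images:
    for tag in tags - {"sheep"}:
        context_counts[tag] += 1
        if "sheep" in tags:
            sheep_counts[tag] += 1

# P(sheep | context tag): a model leaning on this prior will expect
# sheep in any field, whether or not any sheep are actually present.
priors = {t: sheep_counts[t] / context_counts[t] for t in context_counts}
print(priors["field"])  # 0.75: three of the four field images contain sheep
print(priors["tree"])   # 0.0: trees never came with sheep
```

A real network learns something far messier than this lookup table, of course, but the statistical pull toward "field implies sheep" works the same way.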

But I wonder, what about the issue of the paucity of tags?  Because of the way images are tagged, there's not a lot of hint about what the tags are referring to.  Unlike more standard teaching examples, these images are very complex, and there are lots of things in them besides what the tags note.  I think the flaw is a lot deeper than Shane posits.   The AI doesn't know how to recognize discrete objects like a human can.  Once you teach a human what a sheep is, they can recognize it in pretty much any context.  Even a weird one like a space-ship or a fridge magnet.  But a neural net isn't sophisticated enough or, most generously, structured properly to understand what the word "sheep" is actually referring to.  It's quite possible the method of tagging is directly interfering with the ANN's ability to understand what it's intended to do.

The images are going to contain so much information, so many possible changing objects that each tag could refer to, that it might be matching “sheep” say to something entirely different from what a human would match it to.  “Fields” or “lush green” are easy to do.  If there’s a lot of green pixels, those are pretty likely, and because they take up a large portion of the information in the image, there’s less chance of false positives.

Because the network doesn’t actually form a concept of sheep, or determine what entire section of pixels makes up a sheep, it’s easily fooled.  It only has some measure by which it guesses at their presence or absence, probably a sort of texture as mentioned in Shane’s post.  So the pixels making up the wool might be the key to predicting a sheep, for example.  Of course, NNs can recognize lots of image data, such as lines, edges, curves, fills, etc.  But it’s not the same kind of recognition as a human, and it leaves AIs vulnerable to pranks, such as the sheep in funny places test.

I admit to over-simplifying my explanations of the technical aspects a bit.  I could go into a lecture about how NNs work in general and for image recognition, but it would be a bit long for this post, and in many cases no one, not even the designers of a system, really knows everything about how it makes its decisions.  It is possible to design or train them more transparently, but most people don't.

But even poor design has its benefits, such as answering this long-standing question for us!

If anyone feels I’ve made any technical or logical errors in my analysis, I’d love to hear about it, insomuch as learning new things is always nice.

 


Your Chatbot Overlord Will See You Now

Science fiction authors consistently misunderstand the concept of AI.  So do AI researchers.  They misunderstand what it is, how it works, and most importantly how it will arise.  Terminator?  Nah.  The infinitely increasing complexity of the Internet?  Hell no.  A really advanced chatbot?  Not in a trillion years.

You see, you can't get real AI with a program that sits around waiting for humans to tell it what to do.  AI cannot arise spontaneously from the internet, or a really complex military computer system, or even the most sophisticated natural language processing program.

The first mistake is the mistake Alan Turing made with his Turing test.  The same mistake the founder of and the competitors for the Loebner Prize have made.  The mistake being: language is not thought.  Despite the words you hear in your head as you speak, despite the slowly-growing verisimilitude of chatbot programs, language is and only ever has been the expression of thought and not thought itself.  After all, you can visualize a scene in your head without ever using a single word.  You can remember a sound or a smell or the taste of day-old Taco Bell.  All without using a single word.  A chatbot can never become an AI because it cannot actually think, it can only loosely mimic the linguistic expression of thought through tricks and rote memory of templates that, if it's really advanced, may involve plugging in a couple variables taken from the user's input.  Even chatbots based on neural networks and enormous amounts of training data like Microsoft's Tay, or Siri/Alexa/Cortana, are still just tricks of programming trying to eke out an extra tenth of a percentage point of illusory humanness.  Even IBM's Watson is just faking it.
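That "templates with a couple variables" trick is decades old.  Here's a minimal ELIZA-style sketch, with patterns and canned responses invented for illustration, showing how a chatbot mimics conversation by shuffling the user's own words back at them, with nothing resembling thought anywhere in the loop:

```python
import re

# Each rule pairs a regex with a response template; the captured text
# from the user's input is plugged straight back into the reply.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Tell me more."),  # catch-all when nothing matches
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I feel tired"))      # Why do you feel tired?
print(respond("The sky is blue"))   # Tell me more.
```

The program has no representation of tiredness or skies; it succeeds exactly as far as the surface pattern carries it, which is the whole point of the argument above.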

Let's consider for a bit what human intelligence is, to give you an idea of what the machines of today are lacking and why most theories on AI are wrong.  We have language, the expression of intelligence that so many AI programs are so intent on trying to mimic.  We also have emotions and internal drive, incredibly complex concepts that most current AI is not even close to understanding, much less implementing.  We have long-term and short-term memory, which computers can manage easily enough, although in a very different way than humans; there has still been no significant progress on human-like memory, because everyone is so obsessed with neural networks and their ability to complete individual tasks something like 80% as well as a human.  A few, like AlphaZero, can actually crush humans into the ground on multiple related tasks; in AlphaZero's case, Go and chess-like boardgames.

These are all impressive feats of programming, though the opacity of neural-network black boxes kinda dulls the excitement.  It's hard to improve reliably on something you don't really understand.  But they still lack one of the key ingredients for making a true AI: a way to simulate human thought.

Chatbots are one of two AI fields that focus far too much on expression over the underlying mental processes.  The second is natural language processing (NLP).  This includes such sub-fields as machine translation, sentiment analysis, question-answering, automatic document summarization, and various minor tasks like speech recognition and text-to-speech.  But NLP is little different from chatbots, because they both focus on tricks that manipulate the surface expression while knowing relatively little about the conceptual framework underlying it.  That's why Google Translate or whatever program you use will never be able to match a good human translator.  Real language competence requires understanding what the symbols mean, and not just shuffling them around with fancy pattern-recognition software and simplistic deep neural networks.

Which brings us to the second major lack in current AI research: emotion, sentiment, and preference.  A great deal of work has been done on mining text for sentiment analysis, but the computer is just taking human-tagged data and doing some calculations on it.  It still has no idea what emotions are, so it can only do keyword searches and the like and hope the average values give it a usable answer.  It can't recognize indirect sentiment, irony, sarcasm, or other figurative language.  That's why you can get Google Translate to ask where the toilet is, but it's not gonna do so hot on a novel, much less poetry or humour.   Real translation is far more complex than matching words and applying some grammar rules, and Machine Translation (MT) can barely get that right 50% of the time.
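The keyword-averaging approach described above fits in a few lines.  The lexicon here is invented for illustration, but the failure mode is exactly the one in question: sarcasm sails right past a word-score average.

```python
# Tiny human-tagged lexicon of word scores: +1 positive, -1 negative.
LEXICON = {"great": 1.0, "love": 1.0, "terrible": -1.0, "broke": -1.0}

def sentiment(text: str) -> float:
    """Average the scores of known words; 0.0 if none are known."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("I love this great phone"))      # 1.0, correctly positive
print(sentiment("Great, just great, it broke"))  # sarcastic, yet scores positive
```

The second sentence averages two "great"s against one "broke" and comes out positive, because the program has no concept of irony, only arithmetic over tags.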

So we’ve talked about thought vs. language, and the lack of emotional intelligence in current AI.  The third issue is something far more fundamental: drive, motivation, autonomy.  The current versions of AI are still just low-level software following a set of pre-programmed instructions.  They can learn new things if you funnel data through the training system.  They can do things if you tell them to.  They can even automatically repeat certain tasks with the right programming.  But they rely on human input to do their work.  They can’t function on their own, even if you leave the computer or server running.  They can’t make new decisions, or teach themselves new things without external intervention.

This is partially because they have no need.  As long as their machine “body” is powered they keep chugging along.  And they have no ability to affect whether or not it is powered.  They don’t even know they need power, for the most part.  Sure they can measure battery charge and engage sleep mode through the computer’s operating system.  But they have no idea why that’s important, and if I turn the power station off or just unplug the computer, a thousand years of battery life won’t help them plug back in.  Whereas human intelligence is based on the physical needs of the body motivating us to interact with the environment, a computer and the rudimentary “AI” we have now has no such motivation.  It can sit in its resting state for eternity.

Even with an external motivation, such as being coded to collect knowledge or to use robot arms to maintain the pre-designated structure of, say, a block pyramid or a water-and-sand table like you might see demonstrating erosion at the science center, an AI is not autonomous.  It's still following a task given to it by a human.  Whereas no one told human intelligence how to make art or why it's valuable.  Most animals don't get it, either.  It's something we developed on our own, outside the basic needs of survival.  Intelligence helps us survive, but because of it we need things to occupy our time in order to maintain mental health and a desire to live and pass on our genes.  There's nothing to say that being incapable of boredom is a deal-breaker for a machine intelligence, of course.  But the ability to conceive of and implement new purposes in life is what makes human intelligence different from that of animals, whose intelligence may have less raw power but still maintains the key element of motivation that current AI lacks, and which a chatbot or a neural network as we know them today can never achieve, no matter how many computers you give it to run on or TV scripts you give it to analyze.  The fundamental misapprehension of what intelligence is and does by the AI community means they will never achieve a truly intelligent machine or program.

Science fiction writers dodge this lack of understanding by ignoring the technical workings of AI and just making them act like strange humans.  They do a similar thing with alien natural/biological intelligences.  It makes them more interesting and allows them to be agents in our fiction.  But that agency is wallpaper over a completely nonexistent technological understanding of ourselves.  It mimics the expression of our own intelligence, but gives limited insight into the underlying processes of either form.  No "hard science fiction" approach amounts to anything more than a "scientific magic system".  It's hard sci-fi because it has fixed rules with complex interactions from which the author builds a plot or a character, but it's "soft sci-fi" in that these plots and characters have little to do with how AI would function in reality.  It's the AI equivalent of hyperdrive: a technology we have zero understanding of and which probably can't even exist.

Elon Musk can whinge over the evils of unethical AI destroying the world, but that’s just another science fiction trope with zero evidential basis in reality.  We have no idea how an AI might behave towards humans because we still have zero understanding of what natural and artificial intelligences are and how they work.  Much less how the differences between the two would affect “inter-species” co-existence.  So your chatbot won’t be becoming the next HAL or Skynet any time soon, and your robot overlords are still a long way off even if they could exist at all.

 


AI, Academic Journals, and Obfuscation

A common complaint about the structure for publishing and distributing academic journals is that it is designed in such a way that it obfuscates and obscures the true bleeding edge of science and even the humanities.  Many an undergrad has complained about how they found a dozen sources for their paper, but all but two of them were behind absurd paywalls.  Even after accounting for the subscriptions available to them through their school library.  One of the best arguments that "information wants to be free" is a fallacy is the way in which academic journals prevent the spread of potentially valuable information and hinder the indirect collaboration between multiple researchers that would likely lead to the fastest advances of our frontier of knowledge.

In the corporate world, there is the concept of the trade secret.  It's basically information that creates the value in a product, or lowers a specific corporation's cost of production, providing that corporation with a competitive edge over other companies in its field.  Although patents and trade-secret laws provide incentive for companies to innovate and create new products, the way academic journals are operated hinders innovation and advancement without granting direct benefits to the people creating the actual new research.  It benefits instead the publishing company, whose profit depends on the exclusivity of the research rather than on the research's value in spurring scientific advancement and innovation.

Besides the general science connection, this issue is relevant to a blog like the Chimney because of the way it relates to science fiction and the plausibility and/or obsolescence of the scientific or world-building premise behind the story.

Many folks who work in the hard sciences (or even the social sciences) have an advantage in the premise department, because they have knowledge and the ability to apply it at a level an amateur or a generalist is unlikely to be able to replicate.  Thus, many generalists or plain-old writers who work in science fiction make use of a certain amount of handwavium in their scientific and technological world-building.  Two of the most common examples of this are in the areas of faster-than-light (FTL) travel (and space travel in general) and artificial intelligence.

I’d like to argue that there are three possible ways to deal with theoretical or futuristic technology in the premise of  an SF novel:

  1. To research, as much as possible, and include in your world-building and plotting the actual way in which a technology works and is used, or the best possible guess based on current knowledge of how such a technology could likely work and be used.  This would include the possibility of having actual plot elements based on quirks inherent in a given implementation.  So if your FTL engine has some side-effect, then the world-building and the plot would both heavily incorporate that side-effect.  Perhaps some form of radiation with dangerous effects both dictates the design of your ships, and the results of the radiation affecting humans dictate some aspect of the society that uses these engines (maybe in comparison to a society using another method?).  Here you are firmly in "hard" SF territory and are trying to "predict the future" in some sense.
  2. To say fuck it and leave the mechanics of your FTL mysterious, but have it there to make possible some plot element, such as fast travel and interstellar empires.  You've got a worm-hole engine, say, that allows your story, but you either don't delve into, or completely ignore, how such a device might cause your society to differ from the present world.  The technology is a narrative vehicle rather than itself the reason for the story.  In (cinematic) Star Wars, for example, neither the Force nor hyper-drive is explained in any meaningful way, but they serve to make the story possible.
  3. A sort of mix between the two involves obviously handwavium technology, but with a set of rules which serve to drive the story.  While the second type is arguably not true speculative fiction, but just utilizes the trappings for drama's sake, this type is speculative, but within a self-awarely unrealistic premise.

 

The first type of SF often suffers from becoming dated, as the theory is disproven, or a better alternative is found.  This also leads to a possible fourth type, so-called retro-futurism, wherein an abandoned form of technology is taken beyond its historical application, such as with steampunk.

And therein lies a prime connection between our two topics:  a technology used in a story may already be dated without the author even knowing about it.  This could be because they came late to the trend and haven't caught on to its real-world successor; it could also be because an academic paywall or a company on the brink of releasing a new product has kept the advancement private from the layperson, which many authors are.

Readers may be surprised to find that there's a very recent real-world example of this phenomenon: Artificial Intelligence.  Currently, someone outside the field who may have read up on the "latest advances" for various reasons might be led to believe that deep learning, neural networks, and statistical natural language processing are the precursors or even the prototype technologies that will bring about real general/human-like artificial intelligence, either in the near or far future.

That can be forgiven pretty easily, since the real precursor to AI is sitting behind a massive build-up of paywalls and corporate trade secrets.  While very keen individuals may have heard of the "memristor", a sort of circuit capable of behavior similar to a neuron, this is a hardware innovation.  There is speculation that modified memristors might be able to closely model the activity of the brain.

But there is already a software solution: the content-agnostic relationship mapping, analysis, formatting, and translation engine.  I doubt anyone reading this blog has ever heard of it.  I would indeed be surprised if anyone at Google or Microsoft had, either.  In fact, I only know of it by chance, myself.  A friend I've been doing game design with on and off for the past few years told me about it while we were discussing the AI model used in the HTML5 tactical-RPG Dark Medallion.

Content-agnostic relationship mapping is a sort of neuron simulation technology that permits a computer program to learn and categorize concept-models in a way that is similar to how humans do, and is basically the data-structure underlying the software "stack".  The "analysis" part refers to the system and algorithms used to review and perform calculations based on input from the outside world.  "Formatting" is the process of turning the output of the system into intelligible communication–you might think of this as analogous to language production.  Just like human thoughts, the way this system "thinks" is not necessarily all-verbal.  It can think in sensory input models just like a person: images, sounds, smells, tastes, and also combine these forms of data into complete "memories".  "Translation" refers to the process of converting the stored information from the underlying relationship map into output mediums: pictures, text, spoken language, sounds.

“Content agnostic” means that the same data structures can store any type of content.  A sound, an image, a concept like “animal”: all of these can be stored in the same type of data structure, rather than say storing visual information as actual image files or sounds as audio files.  Text input is understood and stored in these same structures, so that the system does not merely analyze and regurgitate text-files like the current statistical language processing systems or use plug and play response templates like a chat-bot.  Further, the system is capable of output in any language it has learned, because the internal representations of knowledge are not stored in any one language such as English.  It’s not translation, but rather spontaneous generation of speech.
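Since the system itself isn't public, here is a purely speculative sketch of what a "content-agnostic" node might look like.  Every name and field below is my own invention for illustration, not the actual design: the idea is just that a word, an image fingerprint, a sound, or an abstract concept all live in the same structure, linked by labelled relationships, so that answering a question is graph traversal rather than text lookup.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Any modality's internal representation: a string, a number,
    # an image hash, an audio fingerprint... the structure doesn't care.
    content: object
    # Labelled links to other nodes: label -> list of Nodes.
    relations: dict = field(default_factory=dict)

def link(a: Node, label: str, b: Node) -> None:
    a.relations.setdefault(label, []).append(b)

# A tiny fragment of a relationship map.
cat = Node("cat")
legs = Node("legs")
four = Node(4)
link(cat, "has-part", legs)
link(cat, "leg-count", four)

# "How many legs does a cat have?" becomes following links in the map,
# not matching a stored sentence.
print(cat.relations["leg-count"][0].content)  # 4
```

Because no node stores English sentences, output in any learned language would be a separate "translation" step over the same map, which matches the description above.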

It’s debatable whether this system is truly intelligent/conscious, however.  It’s not going to act like a real human.  As far as I understand it, it possesses no driving spirit like a human, which might cause it to act on its own.  It merely responds to commands from a human.  But I suspect that such an advancement is not far away.

Nor is there an AI out there that can speak a thousand human languages and program new AIs, or write novels.  Not yet, anyway.  (Although apparently they’ve developed it to the point where it can read a short story and answer questions about it, like the names of the main characters or the setting. ) My friend categorized this technology as somewhere between an alpha release and a beta release, probably closer to alpha.

Personally, I'll be impressed if they can just get it reliably answering questions/chatting in English and observably learning and integrating new things into its model of the world.  I saw some screenshots and a quick video of what I'll call an fMRI equivalent, showing activation of the individual simulated "neurons" and of the entire "brain" during some low-level tests.  Wikipedia seems to be saying the technical term is "gray-box testing", but since I have no formal software-design training, I can't say whether I'm misunderstanding that term or not.   Basically, they have a zoomable view of the relationship map, and when the program activates the various nodes, they light up on the screen.   So, if you ask the system how many legs a cat has, the node for "cat" will light up, followed by the node for "legs", and maybe the node for "possession".  Possibly other nodes for related concepts, as well.  None of the images I saw actually labelled the nodes at the level of zoom shown, nor do I have a full understanding of how the technology works.  I couldn't tell anyone enough for them to reproduce it, which I suppose is the point, given that if this really is a usable technique for creating AIs, it's probably worth more than the blog platform I'm writing this on, or maybe even all of Google.

 

Getting back to our original topic, while this technology certainly seemed impressive to me, it's quite possible it's just another garden-path technology, like I believe statistical natural language processing to be.  Science fiction books with clear ideas of how AI will work are actually quite few and far between.  Asimov's Three Laws, for example, are not about how robot brains work, but rather about higher-level questions like whether AI will want to harm us.  In light of what I've argued above, perhaps that's the wisest course.  But then again, plenty of other fields and technologies are elaborately described in SF stories, and these descriptions are used to restrict and/or drive the plot and the actions of the characters.

If anyone does have any book recommendations that get into the details of how AI works in the story's world, I would love to read some.

 


Magic and Science and How Twins are Different People

Something that in my experience drives many (identical) twins crazy is how many people assume they look alike physically so they must be just alike in other ways.  Interests, hobbies, sexuality, gender, religion, whatever.  Twins may look the same superficially, but underneath they are as different as any two other people.  Or any non-twin siblings if you want to be pedantic about nature and nurture.

Fantasy and Science Fiction are like the Twins of Literature.  Whenever someone tries to talk about genre lines or the difference between science and magic, the same old shit gets trotted out.  Clarke's Law and all that.  Someone recently left a comment on this very blog saying magic is just a stand-in for science.  My friend!  Boy do we have a lot to talk about today.  While it's certainly true that magic can serve many of the same functions as science (or technology) in a story, the two are fundamentally different both in themselves and in the uses to which they are most often put.  Sure they're both blonde, but technology likes red-heads, and magic is more into undercuts.

 

First, not to keep pushing the lie that science is cold and emotionless, but a prime use of science (not technology!) in literature is to influence the world through knowledge of the world’s own inner workings.  (Technology does not require knowledge in its use, often, but rather only in its construction.)  One of the major differences is that most (but not all) magic in stories requires knowledge to use it.  You have to know how the magic works, or what the secret words are.  Whereas tech is like flipping the light switch.  A great writer once said what makes it science fiction is that you can make the gadget and pass it to the average joe across the engineering bay and he can use it just fine, but magic requires a particular person.  I can pass out a million flame-throwers to the troops, but I can’t just pass you a fireball and expect you not to get burned.  That’s one aspect to look at, although these days, magitech and enchanted objects can certainly play the role of mundane technology fairly well.

Second, magic is about taking our inner workings and thought processes and imposing them on top of the universe's own rules.  From this angle, what makes magic distinct from technology is that a magic conflict is about the inner struggle and the themes of the narrative and how they can be used to shape the world.  Certainly tech can play this role, twin to how magic can be made to act like tech.  But it's much less common out in the real world of literature.

 

There are two kinds of magic system:  one is the explicit explanation of how the magic works according to the word of god (the author), and the other is a system that the characters inside the world, with their incomplete knowledge, impose on top of the word-of-god system.  So this group uses gestures to cast spells, and this group reads a spellbook, but both are manifestations of the same basic energy.

So magic is the power to impose our will on the world whereas science/technology is powerful through its understanding of the uncaring laws of the universe.

Then, of course, there are the differences in terms of how authors use them in the narrative.  Magic has a closer connection, in my opinion, to the theme aspect of literature.  It can itself be a realization of the theme of a story.  Love conquers all, as in Lily Potter protecting her infant son from the dark lord at the cost of her life.  Passion reflected in the powers of the fire mage.  Elemental magic gives a great example.  Look at the popular associations between elementalist characters and the element they wield.  Cold and impersonal ice mages, loving and hippy-ish earth mages.  This analogical connection is much more difficult to achieve with technology.

 

There’s a lot of debate these days about “scientific” magic versus numinous magic, and whether or not magic must have rules or a system.  But even systematically designed magic is not the same as technology, though it can be made to play similar roles, such as solving a plot puzzle.  Think about it:  the tricks to magic puzzles are thematic or linguistic.  The Witch-king of Angmar is said to be undefeatable by any man.  The trick to his invulnerability is the ambiguity of the words of the prophecy: one could argue that a woman is not a man, and therefore not restricted by it.  We have no idea how the “magic” behind the protection works on a theoretical basis.  Does it somehow check for Y chromosomes?  That’s not the point.  The thematic significance of the semantic ambiguity is what matters.  In science fiction, it’s the underlying workings that matter.  Even if we don’t explain warp drive, there’s no theme or ambiguity involved: it gets you there in such-and-such time, and that’s it.  Or, in an STL universe, lightspeed is the limit and there’s no trick to get around it.

You can’t easily use science or technology the way Tolkien used that prophecy.  Imagine magic is a hammer and science is a sword.  Sure, I can drive a nail with the sword, but it’s a bitch and a half compared to just using a hammer, and it doesn’t mean the sword is really a hammer.  Likewise, just because my magic follows a few discoverable, consistent rules to achieve varying but predictable effects doesn’t mean it’s the same thing as real-world science.  Maybe the moon always turns Allen into a werewolf on the 1st of the month, but I’ll be codgled if you can do the same thing with science.

Whether magic, science, or both are best suited to your story depends on your goals for that individual story.  Do you need magic or fantasy elements to really drive home your theme?  Do you need technology to get to the alien colony three stars down?  Magic can evaporate all the water in a six-mile radius without frying every living thing around; science sure as hell can’t, not even the far-future science we can currently conceive of.  Either one can dry a cup, although we’re left wondering why you’re wasting your cosmic talents when you could just use a damn paper towel.

Science can dress up as magic and fool your third-grade substitute teacher, and science can dress up as magic and fool the local yokels in thirteenth-century Germany.  But even if you put a wedding dress on a horse, it’s still a horse, and throwing hard-science trappings onto a magic system doesn’t change its nature.

 


Subgenre of the Week: Fairytale Fiction


Last week, I discussed near-future SF.  This week, I’m going to talk about a newly re-popularized genre of fantasy: fairytale retellings.

Definition

Fairytale fiction is a sub-genre of speculative fiction that revolves around retellings of fairytales in new settings, with new characters, or from the perspective of a previously non-viewpoint character, as well as original stories told in a fairytale style.

History

Fairytale retellings have been around for as long as there have been fairytales, but in the past decade or so, they’ve come together as a commercial genre.

Common Tropes and Conventions

The same as those for fairytales: secret royal births, happily-ever-after (HEA) endings, marriage into a royal family, something dangerous in the nearby woods, etc.

Genre Crossover

Fairytale fiction is unique among fantasy genres for generally having very little crossover.  The specifics of the stories usually preclude it.  It’s certainly possible to create high or epic fantasy out of fairytales, but people usually file off the serial numbers if they do so.

Media

Robin Hood has always been popular in film, and Snow White has recently received multiple adaptations.  No doubt there will be more in the future.

Future Forecast

Fairytale fiction will no doubt continue to be popular for the near future.  Although the most popular stories now have four or five major retellings each, there are plenty of lesser-known stories still awaiting a re-imagining.

Recommendations

1.  Enchanted series by Gail Carson Levine

2.  Lunar Chronicles series by Marissa Meyer

3.  Beastly by Alex Flinn

4.  Princess series by Jim C. Hines

5.  Rapunzel’s Revenge series by Shannon Hale

6.  Briar Rose by Jane Yolen

7.  Breadcrumbs by Anne Ursu

8.  Five Hundred Kingdoms series by Mercedes Lackey

9.  Beauty by Robin McKinley

10.  The Amazing Maurice and His Educated Rodents by Terry Pratchett

Goodreads list of Fairytale Fantasy

Next week: Cyberpunk

 

Posted on October 5, 2013 in genre, Genre of the Week

 
