
Tag Archives: Science Fiction

YA and SFF: The Good Twin and the Bad Twin

So as I was scrolling through my Twitter feed today, I ran across a link to this article by Fonda Lee: The Case for YA Science Fiction.  Read the post before you continue.  I’ll wait…

Okay.  So, the gist of the post is that YA Fantasy novels have been selling like crazy.  There are several big-name authors in the genre, including those mentioned in Lee's post and many others.  I can tell you right now I've read most of the books put out by all of those authors in the YA Fantasy genre.  And so have millions of others.  They may not be as popular as dystopians, and they certainly don't get as many movie deals.  But they move a lot of dead trees and digital trees.  I've been blogging and writing long enough to remember four or five rounds of "Will Science Fiction be the next big thing in YA?"  And the answer was always no.  There would be upticks and uptrends.  Several fantastic books would come out in a short period.  But nothing would ever really break into the big money or sales the way YA Fantasy often does.  It wouldn't be blasted all over the blogosphere, or the writers' forums, or the tip-top of the bestseller lists.  Which is too bad, because science fiction has a lot of value to add to YA as a category, and it can address issues in ways not available to other genres.

Lee mentions several notable YA SF novels that take on current events and other contemporary issues that are ripe for exploration: MT Anderson’s Feed is a fantastic look at the way social media has been taken over by advertisers looking to build monetizable consumer profiles, and the ending, without spoilers, takes a look at just how far they go in valuing those profiles over the actual humans behind them.  She mentions House of the Scorpion, which I didn’t care for, but which is still a very good novel on the subject of cloning.  Scott Westerfeld never gets credit for his amazing additions to the YA SF canon, with the steampunk Leviathan series and the dystopian Uglies series.

YA SF has a lot of unmined treasure to be found, and maybe it will have to focus a bit on near-future SF for a while, to whet the appetite of YA readers.  Some of the hard SF tropes Lee discusses in her post kinda bore me, honestly.  And as a writer, I feel like saying "it's magic" is popular because it's simpler.  There's always a huge debate in adult SFF about whether the worldbuilding or science details really add enough to the story compared to the narrative effects of the speculative elements.  The social issues we are having as a world today are incredibly accessible fruit for a YA SF novel to harvest.  Social media, AI/big data, consumer profiles, technology in education.

I mean, I know 8-year-olds whose schools give out tablets to every student to take advantage of what tech in the classroom can offer.  My high school was getting SmartBoards in every classroom just a year after I left in the late 2000s.  But you never see any of this in YA books.  They often feel set no later than my sophomore year of high school given the technology and social issues involved.  Being a teenager will always be being a teenager, but the 80s and early 90s are waaaaaaaaaaaaayyy different than what young adults encounter in their general environment today.  Of course, to be SF you can’t just upgrade the setting to the present day.

You have to extrapolate out quite a bit further than that.  But given the environment today's teens are living in, doing so while keeping the story interesting and relatable is so easy.  What's the next big advance in social media?  How will smart houses and the internet of things impact the lives of young adults, for better or worse?  How will the focus of education change as more and more things that you used to have to do in your head or learn by rote are made trivial by computers?  What social or political trends are emerging that might have big consequences in the lives of future teenagers?  How could an author explore those more intensely with elements of science fiction than they could in a contemporary novel?

I definitely share Lee's sense that YA "science fiction" grabs genre trappings to stand out from the crowd rather than being rooted inherently in the tropes of the genre.  It's not uncommon for YA in general to play this game with various genre outfits, but sci-fi often seems the hardest hit.  That's not a criticism of those books; I'm just pointing out that it might give readers, writers, and publishers a false image of what SF really is and how YA can benefit from incorporating more of it.

As a reader, I've always dabbled in both the YA and Adult bookcases.  And from that perspective, I wonder if the flavor of much YA SF might be telling SF readers, teenaged or otherwise, that it's just not the book for them.

As a writer, I have lots of novel ideas that are YA and SF, and I'd like to explore them, and maybe even publish some of them one day.  But I do have to wonder, given the wide variety of stories building in my head: am I taking a risk with my career by writing in such a threadbare genre?  Perhaps others with similar ideas feel the same, and that's why they aren't submitting these ideas (books) to publishers?


Do Androids Dream?

I'm here with some fascinating news, guys.  Philip K. Dick may have been joking with the title of his famous novel Do Androids Dream of Electric Sheep?  But science has recently answered this deep philosophical question for us.  In the affirmative.  The fabulous Janelle Shane trains neural networks on image-recognition datasets with the goal of uncovering some incidental humour.  She's taken this opportunity to answer a long-standing question in AI.  As it turns out, artificial neural networks do indeed dream of digital sheep.  Whether androids will too is a bit more difficult to say.  I'd hope we would improve our AI software a bit more before we start trying to create artificial humans.

As Shane explains in the above blog post, the neural network was trained on thousands or even millions (or more) of images, which were pre-tagged by humans for important features.  In this case, lush green fields and rocky mountains.  Also, sheep and goats.  After training, she tested it on images with and without sheep, and it turns out it's surprisingly easy to confuse.  It assumed sheep where there were none and missed sheep (and goats) staring it right in the face.  In the second case, it identified them as various other animals based on the other tags attached to images of them.  Dogs in your arms, birds in a tree, cats in the kitchen.
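(A quick aside for anyone who's never seen what "testing" a trained classifier actually looks like.  Below is a minimal sketch using a generic pretrained model from PyTorch's torchvision library.  To be clear, this is not Shane's actual setup, and the image filename is just a placeholder; it's only meant to show the general shape of the exercise.)

```python
# A minimal sketch of querying a generic pretrained image classifier.
# NOT Shane's setup -- just an off-the-shelf torchvision model, to show
# what "show the network a picture and see what it says" looks like.
from PIL import Image
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()                       # inference only; no training here
preprocess = weights.transforms()  # the resize/crop/normalize this model expects

# Placeholder filename -- point it at any photo you like.
img = Image.open("field_with_sheep.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# The model can only ever answer in terms of the fixed label set it was
# trained on; a sheep in an unfamiliar context gets the nearest label.
top5 = probs.topk(5)
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{weights.meta['categories'][idx]}: {p:.2%}")
```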

This is where Shane and I come to a disagreement.  She suggests that the confusion is the result of insufficient context clues in the images.  That is, a fur-like texture plus a tree makes a bird; with a leash, it makes a dog; in a field, a sheep.  The network sees a field and expects sheep.  If there's an over-abundance of sheep in the fields of the training data, it starts to expect sheep in all the fields.

But I wonder: what about the issue of the paucity of tags?  Because of the way images are tagged, there's not a lot of hint about what the tags are referring to.  Unlike more standard teaching examples, these images are very complex, and there are lots of things in them besides what the tags note.  I think the flaw is a lot deeper than Shane posits.  The AI doesn't know how to recognize discrete objects the way a human can.  Once you teach a human what a sheep is, they can recognize it in pretty much any context.  Even a weird one, like a spaceship or a fridge magnet.  But a neural net isn't sophisticated enough or, most generously, structured properly to understand what the word "sheep" is actually referring to.  It's quite possible the method of tagging is directly interfering with the ANN's ability to understand what it's intended to do.

The images are going to contain so much information, so many possible changing objects that each tag could refer to, that it might be matching “sheep” say to something entirely different from what a human would match it to.  “Fields” or “lush green” are easy to do.  If there’s a lot of green pixels, those are pretty likely, and because they take up a large portion of the information in the image, there’s less chance of false positives.

Because the network doesn't actually form a concept of sheep, or determine what entire section of pixels makes up a sheep, it's easily fooled.  It only has some measure by which it guesses at their presence or absence, probably a sort of texture, as mentioned in Shane's post.  So the pixels making up the wool might be the key to predicting a sheep, for example.  Of course, NNs can recognize lots of image data, such as lines, edges, curves, fills, etc.  But it's not the same kind of recognition as a human's, and it leaves AIs vulnerable to pranks, such as the sheep-in-funny-places test.
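You can actually demonstrate this crutch-on-context effect with a toy model.  The sketch below is entirely my own invention, two fake features and scikit-learn, nothing to do with the real network: train a classifier on data where "lush green field" almost always co-occurs with "sheep," and it learns to trust the scenery more than the wool.

```python
# Toy demonstration of a model leaning on context instead of the object.
# Entirely synthetic data -- this is not the network from Shane's post.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
sheep = rng.integers(0, 2, n)  # ground truth: is there a sheep?

# Feature 0: "woolly texture" -- the real sheep signal, but noisy (15% flips).
texture = (sheep ^ (rng.random(n) < 0.15)).astype(float)
# Feature 1: "lush green field" -- pure context, but almost perfectly
# correlated with sheep in the training data (only 2% flips).
field = (sheep ^ (rng.random(n) < 0.02)).astype(float)

model = LogisticRegression().fit(np.column_stack([texture, field]), sheep)
print("learned weights [texture, field]:", model.coef_[0])

# Break the correlation: a clearly woolly sheep... in a kitchen.
print("P(sheep | wool, no field):", model.predict_proba([[1.0, 0.0]])[0, 1])
# And an empty green field with no sheep in sight.
print("P(sheep | field, no wool):", model.predict_proba([[0.0, 1.0]])[0, 1])
```

Run it and the field weight dominates: the model doubts the kitchen sheep and hallucinates one in the empty field, which is the same failure Shane documented, just in two dimensions instead of millions.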

I admit to over-simplifying my explanations of the technical aspects a bit.  I could go into a lecture about how NNs work in general and for image recognition in particular, but it would be a bit long for this post, and in many cases no one, not even a system's designers, really knows everything about how it makes its decisions.  It is possible to design or train them more transparently, but most people don't.

But even poor design has its benefits, such as answering this long-standing question for us!

If anyone feels I've made any technical or logical errors in my analysis, I'd love to hear about it, inasmuch as learning new things is always nice.

 


Your Chatbot Overlord Will See You Now

Science fiction authors consistently misunderstand the concept of AI.  So do AI researchers.  They misunderstand what it is, how it works, and most importantly how it will arise.  Terminator?  Nah.  The infinitely increasing complexity of the Internet?  Hell no.  A really advanced chatbot?  Not in a trillion years.

You see, you can't get real AI with a program that sits around waiting for humans to tell it what to do.  AI cannot arise spontaneously from the internet, from a really complex military computer system, or even from the most sophisticated natural language processing program.

The first mistake is the mistake Alan Turing made with his Turing test.  The same mistake the founder of, and competitors for, the Loebner Prize have made.  The mistake being: language is not thought.  Despite the words you hear in your head as you speak, despite the slowly-growing verisimilitude of chatbot programs, language is and only ever has been the expression of thought, not thought itself.  After all, you can visualize a scene in your head without ever using a single word.  You can remember a sound or a smell or the taste of day-old Taco Bell.  All without using a single word.  A chatbot can never become an AI because it cannot actually think; it can only loosely mimic the linguistic expression of thought through tricks and rote memory of templates that, if it's really advanced, may involve plugging in a couple of variables taken from the user's input.  Even chatbots based on neural networks and enormous amounts of training data, like Microsoft's Tay or Siri/Alexa/Cortana, are still just tricks of programming trying to eke out an extra tenth of a percentage point of illusory humanness.  Even IBM's Watson is just faking it.
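To show what I mean by tricks and templates, here's a toy ELIZA-style bot of my own devising.  Real chatbots are fancier than this, but the fundamental move, matching patterns and plugging captured text back into canned templates, is the same:

```python
# A toy ELIZA-style chatbot: pattern matching plus templates with a
# variable or two plugged in. No thought involved anywhere.
import re

RULES = [
    # (pattern to match, response template using captured text)
    (r"i feel (.*)",   "Why do you feel {0}?"),
    (r"i want (.*)",   "What would it mean to you to get {0}?"),
    (r"my (\w+) (.*)", "Tell me more about your {0}."),
    (r"(.*)\?",        "What do you think?"),
]
FALLBACK = "Please, go on."

def respond(user_input: str) -> str:
    text = user_input.lower().strip()
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            # "Plugging in a couple of variables taken from the user's input"
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel ignored by my phone"))
# -> Why do you feel ignored by my phone?
print(respond("My toaster hates me"))
# -> Tell me more about your toaster.
```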

Let's consider for a bit what human intelligence is, to give you an idea of what the machines of today are lacking, and why most theories on AI are wrong.  We have language, the expression of intelligence that so many AI programs are so intent on trying to mimic.  We also have emotions and internal drive, incredibly complex concepts that most current AI is not even close to understanding, much less implementing.  We have long-term and short-term memory, something that's relatively easy for computers to do, although in a different way than humans, and on which there has still been no significant progress because everyone is so obsessed with neural networks and their ability to complete individual tasks something like 80% as well as a human.  A few, like AlphaZero, can actually crush humans into the ground on multiple related tasks; in AlphaZero's case, chess-like board games.

These are all impressive feats of programming, though the opacity of neural-network black boxes kinda dulls the excitement.  It's hard to improve reliably on something you don't really understand.  But they still lack one of the key ingredients for making a true AI: a way to simulate human thought.

Chatbots are one of two AI fields that focus far too much on expression over the underlying mental processes.  The second is natural language processing (NLP).  This includes such sub-fields as machine translation, sentiment analysis, question-answering, automatic document summarization, and various minor tasks like speech recognition and text-to-speech.  But NLP is little different from chatbots, because both focus on tricks that manipulate the surface expression while knowing relatively little about the conceptual framework underlying it.  That's why Google Translate, or whatever program you use, will never be able to match a good human translator.  Real language competence requires understanding what the symbols mean, not just shuffling them around with fancy pattern-recognition software and simplistic deep neural networks.

Which brings us to the second major lack in current AI research: emotion, sentiment, and preference.  A great deal of work has been done on mining text for sentiment analysis, but the computer is just taking human-tagged data and doing some calculations on it.  It still has no idea what emotions are, so it can only do keyword searches and the like and hope the average values give it a usable answer.  It can't recognize indirect sentiment, irony, sarcasm, or other figurative language.  That's why you can get Google Translate to ask where the toilet is, but it's not gonna do so hot on a novel, much less poetry or humour.  Real translation is far more complex than matching words and applying some grammar rules, and machine translation (MT) can barely get that right 50% of the time.
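To make that concrete, here's a deliberately naive keyword scorer, my own toy sketch rather than any real product's algorithm.  Real systems use weighted lexicons and statistics on top of human-tagged data, but the failure mode on sarcasm is the same in kind:

```python
# Deliberately naive keyword sentiment scorer -- roughly the kind of
# averaging described above, minus the statistical dressing.
POSITIVE = {"love", "great", "wonderful", "fantastic", "perfect"}
NEGATIVE = {"hate", "awful", "terrible", "broken", "worst"}

def sentiment(text: str) -> float:
    words = text.lower().replace(".", " ").replace(",", " ").split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / max(len(words), 1)

# Straightforward cases work fine:
print(sentiment("I love this phone, the camera is great"))   # positive
print(sentiment("Terrible battery, worst purchase ever"))    # negative

# Sarcasm and indirect sentiment sail right past keyword counting:
print(sentiment("Oh great, it bricked itself on day two. Perfect."))
# Scores *positive* -- "great" and "perfect" are right there in the list,
# and the scorer has no idea what they're doing in that sentence.
```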

So we've talked about thought vs. language, and the lack of emotional intelligence in current AI.  The third issue is something far more fundamental: drive, motivation, autonomy.  The current versions of AI are still just low-level software following a set of pre-programmed instructions.  They can learn new things if you funnel data through the training system.  They can do things if you tell them to.  They can even automatically repeat certain tasks with the right programming.  But they rely on human input to do their work.  They can't function on their own, even if you leave the computer or server running.  They can't make new decisions, or teach themselves new things without external intervention.

This is partially because they have no need.  As long as their machine “body” is powered they keep chugging along.  And they have no ability to affect whether or not it is powered.  They don’t even know they need power, for the most part.  Sure they can measure battery charge and engage sleep mode through the computer’s operating system.  But they have no idea why that’s important, and if I turn the power station off or just unplug the computer, a thousand years of battery life won’t help them plug back in.  Whereas human intelligence is based on the physical needs of the body motivating us to interact with the environment, a computer and the rudimentary “AI” we have now has no such motivation.  It can sit in its resting state for eternity.

Even with an external motivation, such as being coded to collect knowledge, or to use robot arms to maintain the pre-designated structure of, say, a block pyramid or one of those water-and-sand erosion tables you might see at the science center, an AI is not autonomous.  It's still following a task given to it by a human.  Whereas no one told human intelligence how to make art or why it's valuable.  Most animals don't get it, either.  It's something we developed on our own, outside of the basic needs of survival.  Intelligence helps us survive, but because of it we need things to occupy our time in order to maintain mental health and a desire to live and pass on our genes.  There's nothing to say that a complete inability to be bored is a no-go for a machine intelligence, of course.  But the ability to conceive and implement new purposes in life is what makes human intelligence different from that of animals, whose intelligence may have less raw power but still maintains the key element of motivation that current AI lacks, and which a chatbot or a neural network as we know them today can never achieve, no matter how many computers you give it to run on or TV scripts you give it to analyze.  The fundamental misapprehension by the AI community of what intelligence is and does means they will never achieve a truly intelligent machine or program.

Science fiction writers dodge this lack of understanding by ignoring the technical workings of AI and just making them act like strange humans.  They do a similar thing with alien natural/biological intelligences.  It makes them more interesting and allows them to be agents in our fiction.  But that agency is wallpaper over a completely nonexistent technological understanding of ourselves.  It mimics the expression of our own intelligence, but gives limited insight into the underlying processes of either form.  No "hard science fiction" approach amounts to anything more than a "scientific magic system."  It's hard sci-fi because it has fixed rules with complex interactions from which the author builds a plot or a character, but it's "soft sci-fi" in that these plots and characters have little to do with how AI would function in reality.  It's the AI equivalent of hyperdrive: a technology we have zero understanding of and which probably can't even exist.

Elon Musk can whinge over the evils of unethical AI destroying the world, but that's just another science fiction trope with zero evidential basis in reality.  We have no idea how an AI might behave towards humans, because we still have zero understanding of what natural and artificial intelligences are and how they work.  Much less how the differences between the two would affect "inter-species" co-existence.  So your chatbot won't be becoming the next HAL or Skynet any time soon, and your robot overlords are still a long way off, even if they could exist at all.

 


AI, Academic Journals, and Obfuscation

A common complaint about the structure for publishing and distributing academic journals is that it obfuscates and obscures the true bleeding edge of science and even the humanities.  Many an undergrad has complained about finding a dozen sources for their paper, only to discover that all but two of them were behind absurd paywalls, even after accounting for the subscriptions available through their school library.  One of the best arguments that "information wants to be free" is a fallacy is the way academic journals prevent the spread of potentially valuable information and hinder the indirect collaboration between researchers that would likely lead to the fastest advances of our frontier of knowledge.

In the corporate world, there is the concept of the trade secret: a form of information that creates the value in a product, or lowers a specific corporation's cost of production, and thereby provides that corporation with a competitive edge over other companies in its field.  Although patents and trade-secret laws provide incentive for companies to innovate and create new products, the way academic journals are operated hinders innovation and advancement without granting direct benefits to the people creating the actual new research.  It benefits instead the publishing company, whose profit depends on the exclusivity of the research rather than on the value of the research itself to spur scientific advancement and create innovation.

Besides the general science connection, this issue is relevant to a blog like the Chimney because of the way it relates to science fiction and the plausibility and/or obsolescence of the scientific or world-building premise behind the story.

Many folks who work in the hard sciences (or even the social sciences) have an advantage in the premise department, because they have knowledge, and the ability to apply it, at a level an amateur or a generalist is unlikely to be able to replicate.  Thus, many generalists or plain-old writers who work in science fiction make use of a certain amount of handwavium in their scientific and technological world-building.  Two of the most common examples of this are in the areas of faster-than-light (FTL) travel (and space travel in general) and artificial intelligence.

I'd like to argue that there are three possible ways to deal with theoretical or futuristic technology in the premise of an SF novel:

  1. To research and include in your world-building and plotting, as much as possible, the actual way in which a technology works and is used, or the best possible guess based on current knowledge of how such a technology could likely work and be used.  This would include the possibility of having actual plot elements based on quirks inherent in a given implementation.  So if your FTL engine has some side-effect, then the world-building and the plot would both heavily incorporate that side-effect.  Perhaps some form of radiation with dangerous effects dictates the design of your ships, and the results of that radiation affecting humans dictate some aspect of the society that uses these engines (maybe in comparison to a society using another method?).  Here you are firmly in "hard" SF territory and are trying to "predict the future" in some sense.
  2. To say fuck it and leave the mechanics of your FTL mysterious, but have it there to make possible some plot element, such as fast travel and interstellar empires.  You've got a wormhole engine, say, that allows your story, but you don't delve into (or you completely ignore) how such a device might cause your society to differ from the present world.  The technology is a narrative vehicle rather than itself the reason for the story.  In (cinematic) Star Wars, for example, neither the Force nor the hyperdrive is explained in any meaningful way, but they serve to make the story possible.
  3. A sort of mix between the two involves obviously-handwavium technology, but with a set of rules which serve to drive the story.  While the second type is arguably not true speculative fiction, just borrowing the trappings for drama's sake, this type is speculative, but within a self-awarely unrealistic premise.

 

The first type of SF often suffers from becoming dated, as the theory is disproven or a better alternative is found.  This also leads to a possible fourth type, so-called retro-futurism, wherein an abandoned form of technology is taken beyond its historical application, such as with steampunk.

And therein lies a prime connection between our two topics: a technology used in a story may already be dated without the author even knowing about it.  This could be because they came late to the trend and haven't caught on to its real-world successor; it could also be because an academic paywall, or a company on the brink of releasing a new product, has kept the advancement private from the layperson, which many authors are.

Readers may be surprised to find that there's a very recent real-world example of this phenomenon: artificial intelligence.  Currently, someone outside the field who has read up on the "latest advances" for various reasons might be led to believe that deep learning, neural networks, and statistical natural language processing are the precursors or even the prototype technologies that will bring about real general/human-like artificial intelligence, either in the near or the far future.

That can be forgiven pretty easily, since the real precursor to AI is sitting behind a massive build-up of paywalls and corporate trade secrets.  Very keen individuals may have heard of the "memristor," a sort of circuit capable of behavior similar to a neuron's, but that is a hardware innovation.  There is speculation that modified memristors might be able to closely model the activity of the brain.

But there is already a software solution: the content-agnostic relationship mapping, analysis, formatting, and translation engine.  I doubt anyone reading this blog has ever heard of it.  I would indeed be surprised if anyone at Google or Microsoft had, either.  In fact, I only know of it by chance, myself.  A friend I've been doing game design with on and off for the past few years told me about it while we were discussing the AI model used in the HTML5 tactical-RPG Dark Medallion.

Content-agnostic relationship mapping is a sort of neuron-simulation technology that permits a computer program to learn and categorize concept-models in a way similar to how humans do, and it's basically the data structure underlying the software "stack."  The "analysis" part refers to the system and algorithms used to review and perform calculations on input from the outside world.  "Formatting" is the process of turning the output of the system into intelligible communication; you might think of this as analogous to language production.  Just like human thought, the way this system "thinks" is not necessarily all-verbal.  It can think in sensory input models just like a person: images, sounds, smells, tastes, and it can combine these forms of data into complete "memories."  "Translation" refers to the process of converting the stored information from the underlying relationship map into output mediums: pictures, text, spoken language, sounds.

“Content agnostic” means that the same data structures can store any type of content.  A sound, an image, a concept like “animal”: all of these can be stored in the same type of data structure, rather than say storing visual information as actual image files or sounds as audio files.  Text input is understood and stored in these same structures, so that the system does not merely analyze and regurgitate text-files like the current statistical language processing systems or use plug and play response templates like a chat-bot.  Further, the system is capable of output in any language it has learned, because the internal representations of knowledge are not stored in any one language such as English.  It’s not translation, but rather spontaneous generation of speech.
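Obviously I can't show you the real thing, and as I admit below, I don't know enough to reproduce it anyway.  But to make the idea less abstract, here's a toy sketch of what a "relationship map" might look like as a data structure.  Every name and design choice here is my own invention for illustration, not the actual engine:

```python
# A toy "relationship map": every concept, regardless of content type,
# lives in the same node structure, and knowledge is typed edges between
# nodes. All names and structure here are my own invention -- the actual
# engine described in this post is proprietary and unpublished.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    relations: dict = field(default_factory=dict)  # relation -> set of node names

class RelationshipMap:
    def __init__(self):
        self.nodes = {}

    def node(self, name):
        if name not in self.nodes:
            self.nodes[name] = Node(name)
        return self.nodes[name]

    def relate(self, subj, relation, obj):
        # Store the edge both ways so queries can run in either direction.
        self.node(subj).relations.setdefault(relation, set()).add(obj)
        self.node(obj).relations.setdefault("inverse:" + relation, set()).add(subj)

    def query(self, subj, relation):
        # Print the nodes touched while answering -- a crude stand-in
        # for the zoomable "fMRI" view described below.
        answers = self.node(subj).relations.get(relation, set())
        print("activated nodes:", [subj] + sorted(answers))
        return answers

brain = RelationshipMap()
brain.relate("cat", "is-a", "animal")
brain.relate("cat", "has-part", "legs")
brain.relate("cat", "leg-count", "4")

print(brain.query("cat", "leg-count"))  # -> {'4'}
```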

It’s debatable whether this system is truly intelligent/conscious, however.  It’s not going to act like a real human.  As far as I understand it, it possesses no driving spirit like a human, which might cause it to act on its own.  It merely responds to commands from a human.  But I suspect that such an advancement is not far away.

Nor is there an AI out there that can speak a thousand human languages, program new AIs, or write novels.  Not yet, anyway.  (Although apparently they've developed it to the point where it can read a short story and answer questions about it, like the names of the main characters or the setting.)  My friend categorized this technology as somewhere between an alpha release and a beta release, probably closer to alpha.

Personally, I'll be impressed if they can just get it reliably answering questions/chatting in English and observably learning and integrating new things into its model of the world.  I saw some screenshots and a quick video of what I'll call an fMRI equivalent, showing activation of the individual simulated "neurons" and of the entire "brain" during some low-level tests.  Wikipedia seems to be saying the technical term is "gray-box testing," but since I have no formal software-design training, I can't say whether I'm misunderstanding that term or not.  Basically, they have a zoomable view of the relationship map, and when the program activates the various nodes, they light up on the screen.  So, if you ask the system how many legs a cat has, the node for "cat" will light up, followed by the node for "legs," and maybe the node for "possession."  Possibly other nodes for related concepts, as well.  None of the images I saw actually labelled the nodes at the level of zoom shown, nor do I have a full understanding of how the technology works.  I couldn't tell anyone enough for them to reproduce it, which I suppose is the point, given that if this really is a usable technique for creating AIs, it's probably worth more than the blog platform I'm writing this on, or maybe even all of Google.

 

Getting back to our original topic: while this technology certainly seemed impressive to me, it's quite possible it's just another garden-path technology, like I believe statistical natural language processing to be.  Science fiction books with clear ideas of how AI will work are actually quite few and far between.  Asimov's Three Laws, for example, are not about how robot brains work, but rather about higher-level questions, like whether AI will want to harm us.  In light of what I've argued above, perhaps that's the wisest course.  But then again, plenty of other fields and technologies are elaborately described in SF stories, and these descriptions are used to restrict and/or drive the plot and the actions of the characters.

If anyone does have any book recommendations that get into the details of how AI works in the story's world, I would love to read some.

 


Magic and Science and How Twins are Different People

Something that, in my experience, drives many (identical) twins crazy is how many people assume that because they look alike physically, they must be just alike in other ways.  Interests, hobbies, sexuality, gender, religion, whatever.  Twins may look the same superficially, but underneath they are as different as any two other people.  Or any non-twin siblings, if you want to be pedantic about nature and nurture.

Fantasy and Science Fiction are like the Twins of Literature.  Whenever someone tries to talk about genre lines or the difference between science and magic, the same old shit gets trotted out.  Clarke's Law and all that.  Someone recently left a comment on this very blog saying magic is just a stand-in for science.  My friend!  Boy do we have a lot to talk about today.  While it's certainly true that magic can serve many of the same functions as science (or technology) in a story, the two are fundamentally different, both in themselves and in the uses to which they are most often put.  Sure, they're both blonde, but technology likes red-heads, and magic is more into undercuts.

 

First, not to keep pushing the lie that science is cold and emotionless, but a prime use of science (not technology!) in literature is to influence the world through knowledge of the world's own inner workings.  (Technology often requires knowledge not in its use, but only in its construction.)  One of the major differences is that most (but not all) magic in stories requires knowledge to use.  You have to know how the magic works, or what the secret words are.  Whereas tech is like flipping the light switch.  A great writer once said that what makes it science fiction is that you can make the gadget, pass it to the average joe across the engineering bay, and he can use it just fine; magic requires a particular person.  I can pass out a million flame-throwers to the troops, but I can't just pass you a fireball and expect you not to get burned.  That's one aspect to look at, although these days magitech and enchanted objects can certainly play the role of mundane technology fairly well.

Second, magic is about taking our inner workings and thought processes and imposing them on top of the universe's own rules.  From this angle, what makes magic distinct from technology is that a magic conflict is about the inner struggle and the themes of the narrative, and how they can be used to shape the world.  Certainly tech can play this role, twin to how magic can be made to act like tech.  But it's much less common out in the real world of literature.

 

There are two kinds of magic system: one is the explicit explanation of how the magic works according to the word of god (the author), and the other is a system that the characters inside the world, with their incomplete knowledge, impose on top of the word-of-god system.  So this group uses gestures to cast spells, and that group reads from a spellbook, but both are manifestations of the same basic energy.

So magic is the power to impose our will on the world, whereas science/technology is powerful through its understanding of the uncaring laws of the universe.

Then, of course, there are the differences in how authors use them in the narrative.  Magic has a closer connection, in my opinion, to the thematic aspect of literature.  It can itself be a realization of the theme of a story.  Love conquers all, as in Lily Potter protecting her infant son from the dark lord at the cost of her life.  Passion reflected in the powers of the fire mage.  Elemental magic offers a great example: look at the various associations popular between elementalist characters and the elements they wield.  Cold and impersonal ice mages, loving and hippy-ish earth mages.  This analogical connection is much more difficult to achieve with technology.

 

There's a lot of debate these days about "scientific" magic versus numinous magic, and whether or not magic must have rules or a system.  But even systematically designed magic is not the same as technology, though it can be made to play similar roles, such as solving a plot puzzle.  But think: the tricks to magic puzzles are thematic or linguistic.  The Witch-king of Angmar is said to be undefeatable by any man.  The trick to his invulnerability is the ambiguity of the words of the prophecy: one could argue that a woman is not a man, and therefore not restricted by it.  We have no idea how the "magic" behind the protection works on a theoretical basis.  Does it somehow check for Y chromosomes?  But that's not the point.  The thematic significance of the semantic ambiguity is what matters.  In science fiction, it's the underlying workings that matter.  Even if we don't explain the warp drive, there's no theme or ambiguity involved: it gets you there in such-and-such time, and that's it.  Or, in an STL universe, lightspeed is the limit and there's no trick to get around it.

You can't use science or technology the way Tolkien used that prophecy, at least not nearly as easily.  Imagine magic is a hammer, and science is a sword.  Sure, I can put a nail in with the sword, but it's a bitch and a half compared to just using the hammer.  And just because I can put in that nail with that sword, it doesn't mean the sword is really a hammer.  Just because I can have magic that appears to follow a few discoverable and consistent rules to achieve varying but predictable effects doesn't mean it's the same thing as real-world science.  Maybe the moon always turns Allen into a werewolf on the 1st of the month, but I'll be codgled if you can do the same thing with science.

Whether magic or science or both are best suited to your story depends on your goals for that individual story.  Do you need magic or fantasy elements to really drive home your theme?  Do you need technology to get to the alien colony three stars down?  Magic can evaporate all the water in a six-mile radius without frying every living thing around.  Science sure as hell can't.  Not even far-future science that we can conceive of currently.  They can both dry a cup, although we're wondering why you're wasting your cosmic talents when you could just use a damn paper towel.

Magic can dress up as science and fool your third-grade substitute teacher, and science can dress up as magic and fool the local yokels in 13th-century Germany.  But even if you put a wedding dress on a horse, it's still a horse, and throwing hard-science trappings onto a magic system doesn't change its nature.

 


Subgenre of the Week: Fairytale Fiction


Last week, I discussed Near-future SF.  This week, I’m going to talk about a newly re-popularized genre of fantasy: fairytale re-tellings.

Definition:

Fairytale fiction is a sub-genre of speculative fiction that revolves around re-tellings of fairytales, whether in new settings, with new characters, or from the perspective of a character who originally had no point of view, as well as original stories told in the fairytale style.

History

Fairytale retellings have been around for as long as there have been fairytales, but in the past decade or so, they’ve come together as a commercial genre.

Common Tropes and Conventions

The same as those for fairytales: secret royal birth, happily-ever-after (HEA) endings, marriage into a royal family, something dangerous in the nearby woods, etc.

Genre Crossover

Fairytale fiction is unique among fantasy genres for generally having very little crossover.  The specifics of the stories usually preclude it.  It’s certainly possible to create high or epic fantasy out of fairytales, but people usually file off the serial numbers if they do so.

Media

Robin Hood has always been popular in film, and Snow White has just recently received multiple adaptations.  No doubt there will be more in the future.

Future Forecast

Fairytale fiction will no doubt continue to be popular for the near-future.  Although the most popular stories now have four or five major retellings, there are plenty of lesser known stories still awaiting a re-imagining.

Recommendations

1.  Enchanted series by Gail Carson Levine

2.  Lunar Chronicles series by Marissa Meyer

3.  Beastly by Alex Flinn

4.  Princess series by Jim C. Hines

5.  Rapunzel’s Revenge series by Shannon Hale

6.  Briar Rose by Jane Yolen

7.  Breadcrumbs by Anne Ursu

8.  Five Hundred Kingdoms series by Mercedes Lackey

9.  Beauty by Robin McKinley

10.  The Amazing Maurice and His Educated Rodents by Terry Pratchett

Goodreads list of Fairytale Fantasy

Next week: Cyberpunk

 

Subgenre of the Week: Near-future SF


Last week, I talked about Portal Fantasy.  This week, I'm going to tackle another tough-to-categorize genre.

Definition:

Near-future SF is a sub-genre of SF dealing with science fiction stories and concepts just the other side of contemporary.  I’ll limit it to the next fifty years for the purposes of this post.

History

There can be no true history of the genre, since what qualifies changes as time passes.  But the concept originated as a sub-genre in the 90s and grew to its present size and description in the late 2000s.

Common Tropes and Conventions

Besides the fifty-year time frame, there are few major tropes and conventions.  There's a tendency towards exploration of the solar system, biological advances, punk themes, climate change, augmented reality, artificial intelligence, and occasionally fusion reactors and green energy.

Genre Crossover

Near-future SF crosses over with dystopian fiction, Mundane SF, and social science fiction.  It may also share traits with some hard SF.

Media

Near-future SF rarely gets attention in video media, due to its frequent lack of flashy technology.  It does come up now and again in anime and manga.  Otherwise, it's mostly a print genre.

Future Forecast

By definition we’re going to have more of this.  The popularity of near-future SF and its related genres has gone up quite a bit since the post-cyberpunk movement and I don’t see it slowing down any time soon.

Recommendations

1.  Dagmar series by Walter John Williams

2.  Rainbows End by Vernor Vinge

3.  Halting State by Charles Stross

4.  The Wind-up Girl by Paolo Bacigalupi

5.  Pattern Recognition by William Gibson

6.  Moxyland by Lauren Beukes

7.  Air: or Have Not Have by Geoff Ryman

8.  India 2047 series by Ian McDonald

9.  Anime: Planetes

10.  Anime: Dennou Coil

Goodreads list of Near-future SF

Check in next time for a discussion of Fairytale Fiction.

 