Human Conception: From Reality to Narrative

Psychology textbooks like to talk about the idea of “roles”: gender roles, professional roles, class roles, etc.  This is merely one instance of the greater process of human understanding.

Premise:  Reality is infinite and almost infinitely complex.

Premise:  Human beings–and their brains/processing power–are finite.

Question: So how do humans manage to interact with and understand the world?

A human being takes a subset of reality and creates a rule from it.  A system of rules for a given topic becomes a model.  A group of models is understood through a narrative.  Our conception of the world, both physically and intellectually, is composed of a series of narratives.

Similarly, when we consider ourselves, there is a process of understanding–

Person -> Perceptions -> Roles -> Ideals -> Narratives -> Identity

–where “person” is a reality whose totality we cannot completely comprehend. When we consider others, we trade out the idea of Identity for the idea of a Label.  Now, a person can have many labels and many identities depending on context.

This goes back to the premise that we cannot understand everything all at the same time.

It is possible to move from the Label/Identity layer down into narratives, roles, and perceptions.  But no matter how low we go, we can never understand the totality, and this is where we run into the problem of false roles, false narratives, and false labels.  The vast majority of our conceptions of other people are flawed, and the other person would probably disagree with a large portion of them.  And so we have misunderstandings due to our inability to completely conceive of the totality of a person (or the world).


So, we take the facts we have and try to find what’s called a “best fit” case.  When you graph trends in statistics, you draw a line through your data points that best approximates the average location of the points.  The same is true when we judge others, no matter on what axis we are judging them.  We look at our system of roles, ideals, and narratives, and try to find the set of them that most closely fits our perceptions of the person in question.  Then, we construct our idea of their identity from that best fit.  In this way, we warp (slightly or egregiously) the unknowable totality of reality as we experience it to fit a narrative.  Because our system for understanding and interacting with the universe is only capable of so much, we reduce reality down to something it feels like our system can handle.
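The statistical metaphor can be made concrete.  Here’s a minimal sketch of an ordinary least-squares line fit in plain Python (the function name and the sample data are mine, purely for illustration):

```python
def best_fit_line(points):
    """Return (slope, intercept) of the least-squares line through points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    # Slope: covariance of x and y divided by the variance of x.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    # The fitted line always passes through the mean point (mean_x, mean_y).
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Noisy observations roughly following y = 2x + 1:
slope, intercept = best_fit_line([(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)])
```

The fitted line summarizes the points without passing through most of them, which is the same trade the mind makes: a compact rule in exchange for accuracy about the particulars.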

The reason that certain character archetypes and narrative trajectories are so popular is that they match the most easily understandable roles and narratives.  Good vs. evil is easy for our simplified system to handle.  It’s much harder to judge, and therefore to arrive at an “appropriate” emotional response to, grey morality.  Because humans and the cultural sea in which we swim impose a localized “best fit” on our collective consciousness, as writers we can learn about these best fits and cleave to or subvert them for our own purposes in our writing.  We can pick where to deviate in order to focus our attention and improve our chances of successfully getting our meaning across.  Just as we can only handle a certain complexity in understanding reality, we are limited in our ability to deviate from the norm successfully.  Thus the commonly re-quoted “You get one big lie” in regards to maintaining suspension of disbelief.



AI, Academic Journals, and Obfuscation

A common complaint about the structure for publishing and distributing academic journals is that it obfuscates and obscures the true bleeding edge of science and even the humanities.  Many an undergrad has complained about finding a dozen sources for their paper, only to discover that all but two of them were behind absurd paywalls, even after accounting for the subscriptions available through their school library.  One of the best demonstrations that “information wants to be free” is a fallacy is the way in which academic journals prevent the spread of potentially valuable information and hinder the indirect collaboration between researchers that would likely lead to the fastest advances of our frontier of knowledge.

In the corporate world, there is the concept of the trade secret: a piece of information that creates the value in a product, or lowers a specific corporation’s cost of production, and thereby provides that corporation with a competitive edge over other companies in its field.  Although patents and trade-secret laws provide incentive for companies to innovate and create new products, the way academic journals are operated hinders innovation and advancement without granting direct benefits to the people creating the actual new research.  Instead, it benefits the publishing company, whose profit depends on the exclusivity of the research rather than on the research’s value in spurring scientific advancement and innovation.

Besides the general science connection, this issue is relevant to a blog like the Chimney because of the way it relates to science fiction and the plausibility and/or obsolescence of the scientific or world-building premise behind a story.

Many folks who work in the hard sciences (or even the social sciences) have an advantage in the premise department, because they have knowledge, and the ability to apply it, at a level an amateur or a generalist is unlikely to be able to replicate.  Thus, many generalists or plain-old writers who work in science fiction make use of a certain amount of handwavium in their scientific and technological world-building.  Two of the most common examples of this are in the areas of faster-than-light (FTL) travel (and space travel in general) and artificial intelligence.

I’d like to argue that there are three possible ways to deal with theoretical or futuristic technology in the premise of an SF novel:

  1. To research, as much as possible, the actual way in which a technology works and is used (or the best possible guess, based on current knowledge, of how such a technology could work and be used) and to include that in your world-building and plotting.  This includes the possibility of basing actual plot elements on quirks inherent in a given implementation.  So if your FTL engine has some side effect, then the world-building and the plot would both heavily incorporate that side effect.  Perhaps some form of radiation with dangerous effects dictates the design of your ships, and the results of that radiation affecting humans dictate some aspect of the society that uses these engines (maybe in comparison to a society using another method?).  Here you are firmly in “hard” SF territory and are trying to “predict the future” in some sense.
  2. To say fuck it and leave the mechanics of your FTL mysterious, but have it there to make possible some plot element, such as fast travel and interstellar empires.  You’ve got a wormhole engine, say, that makes your story possible, but you either don’t delve into, or completely ignore, how such a device might cause your society to differ from the present world.  The technology is a narrative vehicle rather than itself the reason for the story.  In (cinematic) Star Wars, for example, neither the Force nor the hyperdrive is explained in any meaningful way, but they serve to make the story possible.
  3. A sort of mix between the two involves obviously handwavium technology, but with a set of rules which serve to drive the story.  While the second type is arguably not true speculative fiction, merely utilizing the trappings for drama’s sake, this type is speculative, but within a knowingly unrealistic premise.


The first type of SF often suffers from becoming dated, as its theory is disproven or a better alternative is found.  This also leads to a possible fourth type, so-called retro-futurism, wherein an abandoned form of technology is taken beyond its historical application, as with steampunk.

And therein lies a prime connection between our two topics: a technology used in a story may already be dated without the author even knowing it.  This could be because they came late to the trend and haven’t caught on to its real-world successor; it could also be because an academic paywall, or a company on the brink of releasing a new product, has kept the advancement hidden from the layperson, which many authors are.

Readers may be surprised to find that there’s a very recent real-world example of this phenomenon: artificial intelligence.  Currently, someone outside the field who has read up on the “latest advances” for various reasons might be led to believe that deep learning, neural networks, and statistical natural language processing are the precursors, or even the prototype technologies, that will bring about real general/human-like artificial intelligence, whether in the near or the far future.

That can be forgiven pretty easily, since the real precursor to AI is sitting behind a massive build-up of paywalls and corporate trade secrets.  While very keen individuals may have heard of the “memristor”, a sort of circuit capable of behavior similar to a neuron’s, that is a hardware innovation.  There is speculation that modified memristors might be able to closely model the activity of the brain.

But there is already a software solution: the content-agnostic relationship mapping, analysis, formatting, and translation engine.  I doubt anyone reading this blog has ever heard of it.  I would indeed be surprised if anyone at Google or Microsoft had, either.  In fact, I only know of it by chance, myself.  A friend I’ve been doing game design with on and off for the past few years told me about it while we were discussing the AI model used in the HTML5 tactical RPG Dark Medallion.

Content-agnostic relationship mapping is a sort of neuron-simulation technology that permits a computer program to learn and categorize concept-models in a way similar to how humans do, and it is basically the data structure underlying the software “stack”.  The “analysis” part refers to the system and algorithms used to review and perform calculations based on input from the outside world.  “Formatting” is the process of turning the output of the system into intelligible communication; you might think of this as analogous to language production.  Just like human thought, the way this system “thinks” is not necessarily all verbal.  It can think in sensory input models just like a person (images, sounds, smells, tastes) and also combine these forms of data into complete “memories”.  “Translation” refers to the process of converting the stored information from the underlying relationship map into output mediums: pictures, text, spoken language, sounds.

“Content-agnostic” means that the same data structures can store any type of content.  A sound, an image, a concept like “animal”: all of these can be stored in the same type of data structure, rather than, say, storing visual information as actual image files or sounds as audio files.  Text input is understood and stored in these same structures, so the system does not merely analyze and regurgitate text files like the current statistical language processing systems, or use plug-and-play response templates like a chat-bot.  Further, the system is capable of output in any language it has learned, because its internal representations of knowledge are not stored in any one language such as English.  It’s not translation, but rather spontaneous generation of speech.
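Since I can’t describe the real system in any detail, here is a purely speculative toy sketch (all the names are my own invention) of what “content-agnostic” storage might look like: one node type holds any kind of content, and typed associations link nodes, so a query can walk relationships without caring whether a node began life as text, sound, or image:

```python
class Node:
    """A content-agnostic node: the payload's origin (text, image, sound)
    doesn't change how the node is stored or linked."""
    def __init__(self, label, payload=None):
        self.label = label
        self.payload = payload   # raw sensory data, if any
        self.links = {}          # relation name -> set of Nodes

    def associate(self, relation, other):
        self.links.setdefault(relation, set()).add(other)

    def related(self, relation):
        return {n.label for n in self.links.get(relation, set())}

# Build a tiny relationship map.
cat = Node("cat")
legs = Node("legs")
meow = Node("meow", payload=b"...audio bytes...")
cat.associate("has-a", legs)
cat.associate("makes-sound", meow)

# Asking "what does a cat have?" walks cat -> has-a -> legs;
# a sound and an abstract concept sit in the same structure.
```

The point of the sketch is only that the concept, the sound, and the relationship all live in one uniform structure, which is what would let the same machinery answer questions about any of them.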

It’s debatable whether this system is truly intelligent/conscious, however.  It’s not going to act like a real human.  As far as I understand it, it possesses no driving spirit like a human, which might cause it to act on its own.  It merely responds to commands from a human.  But I suspect that such an advancement is not far away.

Nor is there an AI out there that can speak a thousand human languages and program new AIs, or write novels.  Not yet, anyway.  (Although apparently they’ve developed it to the point where it can read a short story and answer questions about it, like the names of the main characters or the setting.)  My friend categorized this technology as somewhere between an alpha release and a beta release, probably closer to alpha.

Personally, I’ll be impressed if they can just get it reliably answering questions and chatting in English and observably learning and integrating new things into its model of the world.  I saw some screenshots and a quick video of what I’ll call an fMRI equivalent, showing activation of the individual simulated “neurons” and of the entire “brain” during some low-level tests.  Wikipedia seems to say the technical term is “gray-box testing”, but since I have no formal software-design training, I can’t say whether I’m misunderstanding that term or not.  Basically, they have a zoomable view of the relationship map, and when the program activates the various nodes, they light up on the screen.  So, if you ask the system how many legs a cat has, the node for “cat” will light up, followed by the node for “legs”, and maybe the node for “possession”.  Possibly other nodes for related concepts, as well.  None of the images I saw actually labelled the nodes at the level of zoom shown, nor do I have a full understanding of how the technology works.  I couldn’t tell anyone enough for them to reproduce it, which I suppose is the point, given that if this really is a usable technique for creating AIs, it’s probably worth more than the blog platform I’m writing this on, or maybe even all of Google.


Getting back to our original topic: while this technology certainly seemed impressive to me, it’s quite possible it’s just another garden-path technology, as I believe statistical natural language processing to be.  Science fiction books with clear ideas of how AI will work are actually quite few and far between.  Asimov’s Three Laws, for example, are not about how robot brains work, but rather about higher-level questions, like whether AI will want to harm us.  In light of what I’ve argued above, perhaps that’s the wisest course.  But then again, plenty of other fields and technologies are elaborately described in SF stories, and these descriptions are used to restrict and/or drive the plot and the actions of the characters.

If anyone does have any book recommendations that get into the details of how AI works in the story’s world, I would love to read some.



Should Authors Respond to Reviews of Their Books

Quite randomly, I stumbled onto a web of posts and tweets detailing an incident of an author commenting on a review of one of their books, being taken to task for it, and then spending what I see as way too much time further entangling themselves in the resulting kerfuffle.  I won’t name this author, because I’m not posting clickbait.  I read both sides of the argument, and while I sided mostly with the reviewer whose space was invaded, I do think some of the nuance on both sides that was overshadowed by this author’s bad behavior offers valuable insight into both review etiquette and more general netiquette.

First, I want to establish some premises:

  1. Posting to the internet is a public act.  That’s true, at least, if your post is public rather than on a private blog or locked Twitter account, say.  But it ignores the complexities of human social interaction.  If I’m having a chat with my friends at IHOP (insert your franchise pseudo-diner of choice), we’re in public.  So it’s a public act.  But not quite!  If some random patron three tables down were to start commenting on our nastily engaging discussion of who should fuck who in the latest, greatest reverse harem anime, we would probably consider that quite rude.  In fact, we have lots of terms for that sort of thing: butting in, nosy, etc.  I think a valid analogy could be made for the internet.  Sure, my Tweet stream is public, but as a nobody with no claim to fame or blue checkmark, it’d be quite a shock for the POTUS to retweet some comment of mine about the economy or the failings of the folks in Washington.  The line can be a bit blurrier if I run a popular but niche politics blog, or if I have a regional news show on the local Fox affiliate.  But just because you can read what I wrote doesn’t mean I expect, much less desire, a response from you.
  2. My blog/website is my (semi-)private space.  Yours is yours.  I own the platform, I decide the rules.  You can write whatever you want on your blog.  Your right to write whatever you want on mine is much less clear-cut.
  3. You have institutional authority over your own work.  While most authors may not feel like they have much power in the publishing world, as the “creator”, they have enormous implied power in the world of fandom and discussion of their own specific work, or maybe even someone else’s, if they’re well-known friends of Author X, say.  If I criticize the War in Vietnam or Iraq, and a four-star general comes knocking on my door the next day, you better fucking believe I’m gonna be uncomfortable.  An author may not have a battalion of tanks at their disposal, but they sure as hell have presence, possibly very intimidating presence if they are well-known in the industry or for throwing their weight around in fandom.

Given these basic premises, which I hope I have elaborated on specifically enough, I have some conclusions about what I would consider good standard netiquette.  I won’t say “proper”, because I have no authority in this area (nor does anyone, really) to back up such a wording.  But a “reasonable standard of” netiquette, at least, I can make logical arguments for.

  1. Say what you want on your own platform.  You can even respond to what other people have said, especially if you are not an asshole about it and don’t name the names of people who are not egregious offenders against social norms or who haven’t made ad hominem attacks.
  2. Respect people’s bubbles.  We have a concept of how close to stand to someone we’re in a discussion with in real life, for example, and that can be a good metaphor for choosing the platforms on which we respond.  This goes especially for critique, since we know from past experience that responding to negative comments about oneself can be fraught with dangerous possibilities.  I would posit that a person’s private blog is reasonably considered part of their personal space; you should not enter it without a reasonable expectation of a good reception.  A column on a widely-read news site might be considered more public, but then you have to weigh the consideration that news of your bad behavior will be far more public and will spread much faster.  If there is a power imbalance between you and the individual whose space you wish to enter, we have real-world analogies for that, too.  For example, before you enter someone’s house, you knock or ring the doorbell.  A nice email to the specified public contact address, asking if they would mind if you weighed in, is a fairly innocuous way to open communications, and can save face on both sides by avoiding exposing one party to the possible embarrassment of being refused, or the other to the stress of refusing a local celebrity with no clear bad intentions.
  3. Assume permission is required unless explicitly stated otherwise.  This one gets its own bullet point, because I think it’s the easiest way to avoid the most trouble.  A public pool you might enter without announcing your presence.  Would you walk into a stranger’s house without knocking?  One would hope not.
  4. Question your reasons for engaging.  Nobody likes to be called sexist.  Or racist.  Or shitty at doing their research.  Or bad at writing.  But reactionary defenses against what could be construed as such an assertion do not, in my mind, justify an author wading into a fan discussion.  Or a reader discussion, if one considers “fan” as having too much baggage.  An incorrect narrative fact is likely to be swiftly corrected by other readers or fans.  Libel or slander is probably best dealt with legally.  A reviewer is not your editor.  You should probably not be quizzing them for advice on how to improve your writing, or story-telling, or world-building.  Thanking a reviewer for a nice review might be best undertaken as a link on your own blog.  They’ll see the pingback, and can choose to engage or not.  At best, one might pop in to provide a link to one’s own blog with answers to questions raised in the post in question, or a general discussion of the book one may wish to share with those who read the review.  But again, such a link would probably best follow a question on whether any engagement by the author would be appreciated.

Overall, I think I’ve suggested a good protocol for an author to join in fan or reader discussions without causing consternation or full-on flame wars, and at a cost of barely more than the couple of minutes it takes to shoot an email.



Machine “Translation” and What Words Mean in Context

One of the biggest commonly known flaws of machine translation is a computer’s inability to understand differing meaning in context.  After all, a machine doesn’t know what a “horse” is.  It knows that “caballo” has (roughly) the same meaning in Spanish as “horse” does in English.  But it doesn’t know what that meaning is.

And it certainly doesn’t know what it means when we say that someone has a “horse-face” (or a “face like a horse”).


But humans can misunderstand meaning in context, too.  For example, if you don’t know how “machine translation” works, you’d think that machines could actually translate or produce translations.  You would be wrong.  What a human does to produce a translation is not the same as what a machine does to produce a “translation”.  That’s why machine and human translators make different mistakes when trying to render the original meaning in the new language.


A human brain converts words from the source language into meaning, and the meaning back into words in the target language.  A computer converts words from the source language directly to words in the target language, creating a so-called “literal” translation.  A computer would suck at translating a novel, because the figures of speech that make prose (or poetry) what they are are incomprehensible to a machine.  Machine translation programs lack the deeply associated (interconnected) knowledge base that humans use when producing and interpreting language.


A more realistic machine translation (MT) program would require an information web with connections between concepts, rather than words, such that the concept of horse would be related to the concepts of leg, mane, tail, rider, etc., without any intervening linguistic connection.

Imagine a net of concepts represented as data objects.  These are connected to each other in an enormously complex web.  Then, separately, you have a net of linguistic objects, such as words and grammatical patterns, which are overlaid on the concept net, and interconnected.  The objects representing the words for “horse” and “mane” would not have a connection, but the objects representing the concept of meaning underlying these words would have, perhaps, a “has-a” connection, also represented by a connection or “association” object.
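A minimal sketch of that two-layer arrangement (everything here, from the data to the function name, is my own invention for illustration): concepts link to concepts, words attach to concepts per language, and the only path from one language to another runs through the concept layer:

```python
# Concept layer: language-free nodes connected by typed associations.
concepts = {
    "HORSE": {"has-a": {"MANE", "TAIL", "LEG"}},
    "MANE": {}, "TAIL": {}, "LEG": {},
}

# Linguistic layer: per-language words overlaid on the concepts.
lexicon = {
    "en": {"horse": "HORSE", "mane": "MANE"},
    "es": {"caballo": "HORSE", "crin": "MANE"},
}

def render(word, src, tgt):
    """Map a word to its underlying concept, then express that concept
    in the target language. No word-to-word link is ever consulted."""
    concept = lexicon[src][word]
    for tgt_word, tgt_concept in lexicon[tgt].items():
        if tgt_concept == concept:
            return tgt_word
    return None

render("horse", "en", "es")  # "caballo", reached only via the HORSE concept
```

Note that “horse” and “mane” never connect directly in the linguistic layer; the “has-a” relation lives entirely between the HORSE and MANE concept objects, just as described above.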

In order to translate between languages like a human would, you need your program to have an approximation of human understanding.  A famous study suggested that in the brain of a human who knows about Lindsay Lohan, there’s an actual “Lindsay” neuron, which lights up whenever you think about Lindsay Lohan.  It’s probably lighting up right now as you read this post.  Similarly, in our theoretical machine translation program’s information “database”, you have a “horse” “neuron”, represented by the concept object I described above.  It’s separate from the linguistic object “neuron” which contains the word group “Lindsay Lohan”, though the two are probably connected.

Whenever you dig the concept of horse or Lindsay Lohan out of your long-term memory, your brain sort of primes the concept by loading it and related concepts into short-term memory, so your “rehab” neuron probably fires pretty soon after your “Lindsay” neuron.  Similarly, our translation program doesn’t keep its whole data set in RAM constantly, but loads it from whatever our storage medium is, based on what’s connected to the currently loaded portion of the web.
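That priming behavior is easy to caricature in code.  This sketch (again, all the names and data are mine, not any real system’s) keeps the full map in a long-term store and pulls a node plus its immediate neighbors into a small working set on each recall, so related concepts are already loaded by the time they’re needed:

```python
class PrimedStore:
    """Long-term store plus a short-term working set that is primed
    with a concept's neighbors whenever the concept is recalled."""
    def __init__(self, long_term):
        self.long_term = long_term   # concept -> set of related concepts
        self.working_set = set()     # stand-in for RAM / short-term memory

    def recall(self, concept):
        # Load the concept itself and prime its immediate neighbors.
        self.working_set.add(concept)
        self.working_set |= self.long_term.get(concept, set())
        return concept

store = PrimedStore({
    "Lindsay Lohan": {"rehab", "actress"},
    "horse": {"mane", "tail"},
})
store.recall("Lindsay Lohan")
# "rehab" is now in the working set before it has ever been asked for.
```

A real system would also need eviction (forgetting) and weighted links, but the load-on-association idea is the whole trick.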

Current MT programs don’t translate like humans do.  No matter what tricks or algorithms they use, it’s all based on manipulating sequences of letters and basically doing math over a set of equivalences such as “caballo” = “horse”.  Whether they do statistical analysis on corpora of previously translated phrases and sentences, like Google Translate, to find the most likely translation, or a straightforward dictionary look-up one word at a time, they don’t understand what the text they are matching means in either language, and that’s why current approaches will never be able to compare to a reasonably competent human translator.
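The dictionary look-up approach is trivial to sketch, and the sketch makes the failure mode obvious (the toy dictionary is my own):

```python
def literal_translate(sentence, dictionary):
    """Word-for-word substitution: no meaning, just string equivalences."""
    return " ".join(dictionary.get(word, word) for word in sentence.split())

en_to_es = {"the": "el", "horse": "caballo", "eats": "come", "face": "cara"}

literal_translate("the horse eats", en_to_es)  # "el caballo come" -- passable
literal_translate("horse face", en_to_es)      # "caballo cara" -- wrong:
# a Spanish speaker would say "cara de caballo"; the figure of speech is lost.
```

Word order, agreement, and idiom all live in the meaning layer this function never touches, which is exactly the argument above.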

It’s also why current “artificial intelligence” programs will never achieve true human-like general intelligence.  Even your best current chatbot has to use tricks, like pretending to be a Ukrainian teenager with bad English skills on AIM, to pass the so-called Turing test.  A sidewalk artist might draw a picture-perfect crevasse that seems to plunge deep into the Earth below your feet.  But no matter how real it looks, your elevation isn’t going to change.  A bird can’t nest in a picture of a tree, no matter how realistically depicted.

Calling what Google Translate, or any machine “translation” program, does “translation” has to be viewed in context, or else it’s quite misleading.  Language functions properly only in the proper context, and that’s something statistical approaches to machine translation will never be able to imitate, no matter how many billions of dollars are spent on hardware or algorithm development.  Could you eventually get them to where they can probably, usually, mostly communicate the gist of a short newspaper article?  Sure.  Will you be able to engage live in witty repartee over Skype with an acquaintance with whom you share no language?  Probably not.  Not with the kind of system we have now.

Though crude, our theoretical program with the knowledge web described above might take us a step closer, but even if we could perfect and polish it, we’re still a long way from truly useful translation or AI software.  After all, we don’t even understand how we do these things ourselves.  How could we create an artificial version when the natural one still eludes our grasp?



The “Next Big Thing” Generation

So, a common topic in many of the writing communities I used to frequent was “the next big thing”: generally, what genre of book was going to be the next Twilight vampire romance or Hunger Games dystopia.  I had a lot of fun with those discussions, but only recently have I really stopped to consider how damaging the “next big thing” mindset is, not only to literature, but to any field, and to our character as people.

First, it’s damaging to the quality and diversity of books coming out.  If everyone is chasing the next “popular” genre, they aren’t writing, reading, or accepting for publication many good books that just happen not to be the next big thing, or that are part of the last big thing.  And this despite the fact that 90% of the books in the genre of the last big thing were crap, and 7% of the rest were mediocre.

Which ties into my next issue: this attitude creates a hunger for similar books, regardless of quality or of whether the reader would like to try something else, because it creates a comfort zone for the reader.  They know they like dystopia because they liked Hunger Games, so they’re more willing to take a chance on another dystopia than on a high fantasy or Mundane SF.  (Mundane SF itself having once been the next big thing, thus the proper-noun moniker.)

But this is a false comfort zone, for many reasons.  The reader may not actually like dystopia, just that one book.  They may like dystopia but ignore other things they would also really enjoy, to keep from having to stray outside their comfort zone.  They may gorge on so many dystopias that they finally learn to see the flaws in the genre, and therefore ignore a wonderful dystopia down the line, because they’ve moved on to their next big thing.

Or, if they’re jumping on the bandwagon, they may perceive all of YA, say, as mediocre dystopias or obsessed with love triangles.  Perhaps they think all epic fantasy is ASOIAF, which they disliked, and so they don’t take the chance on other works.  For example, maybe they watched the TV show, and aren’t fans of gratuitous sexposition, and so they don’t read the books or similar books because they don’t want to get buried in another avalanche of incest and prostitutes.

Many authors have stories of agents or publishers telling them they have a great book, but they missed the window, or it doesn’t fit with whatever the next big thing is, and so they can’t sell it.  Or they already have ten of these, and even though 8 of them are sub-par, they can’t cancel the contract and pick up this new book.

Or perhaps they like the book, but everyone acquiring fantasy stories right now wants ASOIAF, not comedic contemporary fantasies, or low-key urban fantasies in the original mode without kick-ass leather-wearing, tattoo-bearing heroines with troubled backstories and seriously poor taste in lovers.

And the same can be said for things besides commercial fiction.  Google+ was going to be the next big thing in social media.  Then it was Ello.  Tinder was the next big thing in online dating, and it spawned dozens of clones.  Social media itself is something of a successful next big thing in human interaction and the Internet.  Object-Oriented programming was the next big thing in software design, and yet now the backlash has been going on for years.

Sometimes a next big thing is a great thing.  But the mentality of always hunting for the next big thing is not.  And despite the pressure from our capitalist economy, it might be better in the long term to look for alternatives.  And it is capitalism that is a major driver of this obsession, because history shows even mediocre products can ride the wave of a predecessor to make big money.  Following a successful formula is a bit of a dream situation for many producers of entertainment or products.  That’s why Walmart and most other chains have their own brand version of most popular products, from medicine to housewares to groceries.  The next big thing trend might make some people a decent amount of money in the short-term, but it has long-term effects that have created a sort of creativity pit that we’ll have a hard time climbing out of any time in the near future.  And in the short term, the people who don’t manage to catch the wave, as wonderful as their contributions to literature or software or society may be, are left choking on the dust.


Posted by on January 19, 2017 in atsiko, Uncategorized



Hiatus: Again

So, as I hate my life and happiness and am currently working on a video game project, including the coding and a narrative arc that could probably be comfortably condensed into 47 fantasy trilogies, scheduled posting on the Chimney will be on indefinite hiatus.  That does not mean I won’t be posting.  I probably will.  But it will be sporadic, and all post series are on hiatus.

I’m having a hell of a fun time, so though I am a bit sad that I won’t be ramping back up my posting schedule, I’m not too sad.


Posted by on December 15, 2016 in atsiko, Blogging



Magic’s Pawn

One of my favorite styles of magic, though not often seen, is not a clever way for the protagonist to control the forces of magic, but a system where the forces of magic control the protagonist.  I suppose an ancient prophecy can work kind of like this, or direction from a higher being, but I’m talking about a more concrete and local form of control, yet one exercised by a more abstract force.

The forces of magic involved don’t necessarily have to be sentient or intelligent in the way a human, or even an animal, is, although they could be.  Honestly, I think not being so makes the situation all the more interesting.

Think of the way a bee is involved in an ecosystem: generally as a pollinator.  Now imagine that a human (probably a mage or this world’s equivalent, but not necessarily) has been incorporated into the magical ecosystem of the world in the same way.  Some force of magic has evolved to encourage certain behaviors in human mages that are beneficial to the magic of the world that force of magic is part of.

Perhaps there is a cycle sort of like the water cycle that benefits from humanity in chaos, and so the magic has evolved ways to create that chaos through empowering some mage or person.  The specific actions of the person are irrelevant to the magic, as long as they cause a great upheaval.  The system may not even care if humans would describe this pawn of magic as “evil” or “good”.

Humanoid characters are almost always portrayed as exerting control over the magic of their world, but they are rarely shown to have been integrated into the system–as we are integrated into nature, even despite our control of it–despite what is portrayed in the world’s history as thousands or even millions of years of coexistence.

Where are the magical-world equivalents of modern climate change?  There are apocalypses that are sort of like nuclear-bomb analogs: Mercedes Lackey’s Winds series, for example, with its effects on the world of the end of the war depicted in her Gryphon series.  But rarely if ever are there subtle build-ups of all the interference caused by humans harnessing magical forces.  Not even on the local level, like the magical equivalent of the flooding and ecological damage caused by damming rivers, or the water shortages caused by different political entities failing to cooperate on usage rights to the local river.

I would love to read (or write!) some fantasy exploring a closer relationship between man and magic than simply human master and magical servant/slave.

