
Category Archives: atsiko

Poetry, Language, and Artificial Intelligence

Poetry exemplifies how the meaning of a string of words depends not only upon the sum of the meanings of the words, or on the order in which they are placed, but also upon something we call "context".  Context is essentially the idea that a single word (or concept) has a different meaning depending on its surroundings.  These surroundings could be linguistic (the language we assume the word belongs to, for example), environmental (say it's cold out and I say "It's sooooooo hot."), or a matter of recent events: "The Mets suck" means something very different if they've just won a game than if they've just lost one.

Poetry is the art of manipulating the various possible contexts to get across a deeper or more complex meaning than the bare string of words itself could convey.  The layers of meaning are infinitely deep, and in fact, in any form of creative writing, it is demonstrably impossible for any single human to understand all of them.  I say poetry is the "art" of such manipulation because it is most often the least subtle about engaging in it.  All language acts manipulate context.  Just using a simple pronoun is manipulating context to express meaning.

And we don't decode this manipulation separately from decoding the bare language.  It happens as a sort of infinite feedback loop, working on all the different layers of an utterance at once.  The ability both to manipulate concepts infinitely and to understand our own infinite manipulations might be considered the litmus test for "intelligent" life.

 

Returning to the three words in our title, I've discussed everything but AI.  The difficulty in creating AGI, or artificial general intelligence, lies in the fact that nature had millions or billions of years to sketch out and color in the complex organic machine that grants humans this power of manipulation, whereas humans have had maybe a hundred.  In a classic chicken-and-egg problem, it's quite difficult to have either the concept web or the system that utilizes it without the other part.  If the system creates the web, how do you know how to code the system without knowing the structure of the web?  And if the web comes first, how can you manipulate it without the complete system?

You might have noticed a perfect example of how context affects meaning in that previous paragraph.  One that was not intentional, but that I noticed as I went along: "chicken-and-egg problem".  You can't possibly know what I meant by that phrase without having previously been exposed to the philosophical question of which came first, the chicken that laid the egg, or the egg the chicken hatched from.  But once you do know about the debate, it's pretty easy to figure out what I meant by "chicken-and-egg problem", even though in theory there are infinitely many possible meanings.

How in the world are you going to account for every single one of those situations when writing an AI program?  You can’t.  You have to have a system based on very general principles that can deduce that connection from first principles.

 

Although I am a speculative fiction blogger, I am still a fiction blogger.  So how does this post relate to fiction?  When writing fiction, you are engaging in the sort of context manipulation I've discussed above as such an intractable problem for AI programmers.  Because you are an intelligent being, you can engage in it instinctually when writing, but unless you are a rare genius, you will more likely need to engage in it explicitly.  Really powerful writing comes from knowing exactly what context an event in the story occurs in and taking advantage of that for emotional impact.

The death of a main character is more moving because you have the context of the reader's emotional investment in that character.  An unreliable narrator is a useful tool in a story because the truth is more surprising either when the character knew it and purposefully didn't tell the reader, or when neither of them knew it but it was reasonable given the information both had.  Whereas if the truth is staring the reader in the face but the character is clutching the idiot ball to advance the plot, a reader's reaction is less likely to be shock or epiphany and more likely to be "well, duh, you idiot!"

Of course, context can always go a layer deeper.  If there are multiple perspectives in the story, the same situation can lead to a great deal of tension, because the reader knows the truth but also knows there was no way this particular character could.  But you can also fuck that up and be accused of artificially manipulating events for melodrama, like if a simple phone call could have cleared up the misunderstanding but you went to unbelievable lengths to prevent it, even though both characters had cell phones and each other's numbers.

If the only conceivable reason the call didn't take place was because the author stuck their nose in to prevent it, you haven't properly used or constructed the context for the story.  On the other hand, perhaps there was an unavoidable reason one character lost their phone earlier in the story, one with sufficient connection to other important plot events to be more than just an excuse to avoid the plot-killing phone call.

The point being that, as I said before, the possible contexts for language or events are infinite.  The secret to good writing lies in being able to judge which contexts are most relevant and making sure that your story functions reasonably within those contexts.  Ignoring a really, super-out-of-the-way solution to a problem is obviously a lot more acceptable than ignoring the one staring you in the face.  Sure, your character might be able to send a Morse-code warning message by hacking the electrical grid and blinking the power to New York repeatedly.  But I suspect your readers would be more likely to call you out for solving the communication difficulty that way than for not solving it with the characters' easily reachable cell phones.

I mention the phone thing because currently, due to rapid technological progress, contexts are shifting far more rapidly than they did in the past.  Plot structures honed for centuries around a lack of easy long-range communication are much less serviceable as archetypes now that we have cell phones.  An author who grew up before the age of ubiquitous smartphones for your seven-year-old is going to have a lot more trouble writing a believable contemporary YA romance than someone who is turning twenty-two in the next three months.  But even then, there are fewer context-verified, time-tested plot structures to base such a story on than there are for a similar story set in the 50s.  Just imagine how different Romeo and Juliet would have been if they could have just sent a few quick texts.

In the past, the ability of the characters to communicate at all was a strong driver of plots.  These days, it's far more likely that the trustworthiness of communication will be a central plot point.  In the past, the possible speed of travel dictated the pacing of many events.  That's far less of an issue nowadays; more likely, it's a question of whether you missed your flight.  Although… the increased speed of communication might make some plots more unlikely, it does counteract to some extent the changes in travel speed.  It might be valuable for your own understanding and ability to manipulate context to look at some works in older settings and some in newer ones, and compare how the authors' understanding of context increased or decreased the impact and suspension of disbelief for the story.

Everybody has some context for your 50s love story because they've been exposed to past media depicting it.  And a reader is less likely to criticize shoddy contextualizing when they lack any firm context of their own.  Whereas, of course, an expert on horses is far more likely to find and be irritated by mistakes in your grooming and saddling scenes than a kid born 16 years ago is to criticize a baby-boomer's portrayal of the 60s.

I’m going to end this post with a wish for more stories–both SpecFic and YA–more strongly contextualized in the world of the last 15 years.  There’s so little of it, if you’re gonna go by my high standards.

 


Should Authors Respond to Reviews of Their Books?

Quite randomly, I stumbled onto a web of posts and tweets detailing an incident of an author commenting on a review of one of their books, being taken to task for it, and then spending what I see as way too much time further entangling themselves in the resulting kerfuffle.  I won't name this author, because I'm not posting clickbait.  I read both sides of the argument, and while I sided mostly with the reviewer whose space was invaded, I do think some of the nuance on both sides that was overshadowed by this author's bad behavior offers valuable insight into both review etiquette and more general netiquette.

First, I want to establish some premises:

  1. Posting to the internet is a public act.  That’s true if your post is public rather than on a private blog or Twitter account, say.  But it ignores the complexities of human social interaction.  If I’m having a chat with my friends at IHOP (Insert your franchise pseudo-diner of choice), we’re in public.  So it’s a public act.  But not quite!  If some random patron three tables down were to start commenting on our nastily engaging discussion of who should fuck who in the latest, greatest reverse harem anime, we would probably consider that quite rude.  In fact, we have lots of terms for that sort of thing: butting in, nosy, etc.  I think a valid analogy could be made for the internet.  Sure my Tweet stream is public, but as a nobody with no claim to fame or blue checkmark, it’d be quite a shock for the POTUS to retweet some comment of mine about the economy or the failings of the folks in Washington.  The line can be a bit blurrier if I run a popular but niche politics blog, or if I have a regional news show on the local Fox affiliate.  But just because you can read what I wrote doesn’t mean I expect, much less desire, a response from you.
  2. My blog/website is my (semi-)private space.  Yours is yours.  I own the platform, I decide the rules.  You can write whatever you want on your blog.  Your right to write whatever you want on mine is much less clear-cut.
  3. You have institutional authority over your own work.  While most authors may not feel like they have much power in the publishing world, as the “creator”, they have enormous implied power in the world of fandom and discussion of their own specific work, or maybe even someone else’s, if they’re well-known friends of Author X, say.  If I criticize the War in Vietnam or Iraq, and a four-star general comes knocking on my door the next day, you better fucking believe I’m gonna be uncomfortable.  An author may not have a battalion of tanks at their disposal, but they sure as hell have presence, possibly very intimidating presence if they are well-known in the industry or for throwing their weight around in fandom.

Given these basic premises, which I hope I have elaborated on specifically enough, I have some conclusions about what I would consider good standard netiquette.  I won't say "proper", because neither I nor anyone else really has the authority to back up such a wording.  But I can at least make logical arguments for a "reasonable standard of" netiquette.

  1. Say what you want on your own platform.  And you can even respond to what other people have said, especially if you are not an asshole about it and don't name names, unless the people in question are egregious offenders of social norms or have made ad hominem attacks.
  2. Respect people's bubbles.  We have a concept of how close to stand to someone we're talking with in real life, for example, and that can be a good metaphor for which platforms we choose to respond on.  This matters especially for critique, since responding to negative comments about oneself is something we know from past experience can be fraught with dangerous possibilities.  I would posit that a person's private blog is reasonably considered part of their personal space, and you should not enter it without a reasonable expectation of a good reception.  A column on a widely-read news site might be considered more public, but then you have to weigh the consideration that news of your bad behavior will be far more public and spread much faster.  If there is a power imbalance between you and the individual whose space you wish to enter, we have real-world analogies for that.  For example, before you enter someone's house, you knock or ring the doorbell.  A polite email to the specified public contact address, asking if they would mind if you weighed in, is a fairly innocuous way to open communications, and it can save face on both sides by avoiding exposing one or the other to the possible embarrassment of being refused or the stress of refusing a local celebrity with no clear bad intentions.
  3. Assume permission is required unless explicitly stated otherwise.  This one gets its own bullet point, because I think it's the easiest way to avoid the most trouble.  A public pool you might enter without announcing your presence.  Would you walk into a stranger's house without knocking?  One would hope not.
  4. Question your reasons for engaging.  Nobody likes to be called sexist.  Or racist.  Or shitty at doing their research.  Or bad at writing.  But reactionary defenses against what could be construed as such an assertion do not, in my mind, justify an author wading into a fan discussion.  Or a reader discussion, if one considers "fan" to carry too much baggage.  An incorrect narrative fact is likely to be swiftly corrected by other readers or fans.  Libel or slander is probably best dealt with legally.  A reviewer is not your editor; you should probably not be quizzing them for advice on how to improve your writing, or story-telling, or world-building.  Thanking a reviewer for a nice review might be best undertaken as a link on your own blog.  They'll see the pingback, and can choose to engage or not.  At best, one might pop in to provide a link to one's own blog with answers to questions raised in the post in question, or a general discussion of the book one wishes to share with those who read the review.  But again, such a link would probably best follow a question about whether any engagement by the author would be appreciated.

Overall, I think I've suggested a good protocol for an author to join in fan or reader discussions without causing consternation or full-on flame wars, and at a cost of barely more than the couple of minutes it takes to shoot an email.

 
 


Machine “Translation” and What Words Mean in Context

One of the biggest commonly known flaws of machine translation is a computer's inability to understand differing meaning in context.  After all, a machine doesn't know what a "horse" is.  It knows that "caballo" has (roughly) the same meaning in Spanish as "horse" does in English.  But it doesn't know what that meaning is.

And it certainly doesn't know what it means when we say that someone has a "horse-face" (or a "face like a horse").

 

But humans can misunderstand meaning in context, too.  For example, if you don’t know how “machine translation” works, you’d think that machines could actually translate or produce translations.  You would be wrong.  What a human does to produce a translation is not the same as what a machine does to produce a “translation”.  That’s why machine and human translators make different mistakes when trying to render the original meaning in the new language.

 

A human brain converts words from the source language into meaning, and the meaning back into words in the target language.  A computer converts words from the source language directly to words in the target language, creating a so-called "literal" translation.  A computer would suck at translating a novel, because the figures of speech that make prose (or poetry) what it is are incomprehensible to a machine.  Machine translation programs lack the deeply associated (interconnected) knowledge base that humans use when producing and interpreting language.

 

A more realistic machine translation (MT) program would require an information web with connections between concepts, rather than words, such that the concept of horse would be related to the concepts of leg, mane, tail, rider, etc., without any intervening linguistic connection.

Imagine a net of concepts represented as data objects.  These are connected to each other in an enormously complex web.  Then, separately, you have a net of linguistic objects, such as words and grammatical patterns, which are overlaid on the concept net and interconnected with it.  The objects representing the words "horse" and "mane" would not have a connection, but the objects representing the concepts underlying those words would have, perhaps, a "has-a" connection, itself represented by a connection or "association" object.
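To make that a bit more concrete, here's a minimal sketch of what such a two-layer web might look like in code.  This is purely illustrative: the class names and the "has-a" link are my own inventions for this post, not any existing MT system's design.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str                                   # internal label only, not a word in any language
    links: list = field(default_factory=list)   # Association objects to other concepts

@dataclass
class Association:
    kind: str        # e.g. "has-a", "is-a", "used-by"
    source: Concept
    target: Concept

@dataclass
class Word:
    form: str         # surface string in a specific language
    language: str
    concept: Concept  # the meaning this word points to

def associate(kind, source, target):
    """Create an association object and hang it off the source concept."""
    link = Association(kind, source, target)
    source.links.append(link)
    return link

# The concept layer: no language anywhere, just meaning.
horse = Concept("HORSE")
mane = Concept("MANE")
associate("has-a", horse, mane)

# The linguistic layer, overlaid on the concepts.
words = [
    Word("horse", "en", horse),
    Word("caballo", "es", horse),
    Word("mane", "en", mane),
    Word("crin", "es", mane),
]

# The word objects for "horse" and "mane" are never linked directly;
# the relationship lives between the concepts they point to.
```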

In order to translate between languages like a human would, you need your program to have an approximation of human understanding.  A famous study suggested that in the brain of a human who knows about Lindsay Lohan, there's an actual "Lindsay" neuron, which lights up whenever you think about Lindsay Lohan.  It's probably lighting up right now as you read this post.  Similarly, in our theoretical machine translation program's information "database", you have a "horse" "neuron", represented by the concept object I described above.  It's separate from the linguistic-object "neuron" that contains the word or word group itself, "horse" or "Lindsay Lohan", though probably connected to it.

Whenever you dig the concept of horse or Lindsay Lohan out of your long-term memory, your brain sort of primes the concept by loading it and related concepts into short-term memory, so your "rehab" neuron probably fires pretty soon after your Lindsay neuron.  Similarly, our translation program doesn't keep its whole data set in RAM constantly, but loads it from whatever our storage medium is, based on what's connected to the currently loaded portion of the web.
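A crude way to picture that priming, again just a hypothetical sketch with made-up names: keep the full web in some long-term store, and whenever a concept is touched, pull its direct neighbours into a small working set.

```python
# Hypothetical sketch of "priming": pull a concept's neighbours into a working set.
long_term = {                       # stands in for the concept web kept in slow storage
    "LINDSAY_LOHAN": ["REHAB", "ACTRESS", "MEAN_GIRLS"],
    "HORSE": ["MANE", "TAIL", "RIDER", "LEG"],
    "REHAB": ["CLINIC", "ADDICTION"],
}

working_set = set()                 # stands in for short-term memory / RAM

def prime(concept):
    """Load a concept and its directly related concepts into the working set."""
    working_set.add(concept)
    for neighbour in long_term.get(concept, []):
        working_set.add(neighbour)

prime("LINDSAY_LOHAN")
print(working_set)  # REHAB is now "warm", like the neuron that fires soon after the Lindsay one
```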

Current MT programs don't translate like humans do.  No matter what tricks or algorithms they use, it's all based on manipulating sequences of letters and basically doing math based on a set of equivalences such as "caballo" = "horse".  Whether they do statistical analysis on corpora of previously translated phrases and sentences, like Google Translate, to find the most likely translation, or a straightforward dictionary look-up one word at a time, they don't understand what the text they are matching means in either language, and that's why current approaches will never be able to compare to a reasonably competent human translator.
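As a caricature of that symbol-shuffling, here's roughly what the crudest word-for-word look-up amounts to (a toy dictionary I invented for illustration; statistical systems are far more sophisticated, but they are still matching symbols rather than meanings):

```python
# Toy word-for-word "translation": pure symbol substitution, no meaning anywhere.
es_to_en = {"el": "the", "caballo": "horse", "come": "eats", "manzanas": "apples"}

def translate(sentence):
    # Look each token up; anything unknown just passes through untouched.
    return " ".join(es_to_en.get(word, word) for word in sentence.lower().split())

print(translate("El caballo come manzanas"))  # "the horse eats apples" -- looks plausible
print(translate("Tiene cara de caballo"))     # unknown words pass through; the idiom is lost entirely
```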

It's also why current "artificial intelligence" programs will never achieve true human-like general intelligence.  So, even your best current chatbot has to use tricks like pretending to be a Ukrainian teenager with bad English skills on AIM to pass the so-called Turing test.  A sidewalk artist might draw a picture-perfect crevasse that seems to plunge deep into the Earth below your feet.  But no matter how real it looks, your elevation isn't going to change.  A bird can't nest in a picture of a tree, no matter how realistically depicted.

Calling what Google Translate, or any machine "translation" program, does translation has to be viewed in context, or else it's quite misleading.  Language functions properly only in the proper context, and that's something statistical approaches to machine translation will never be able to imitate, no matter how many billions are spent on hardware or algorithm development.  Could you eventually get them to where they can probably, usually, mostly communicate the gist of a short newspaper article?  Sure.  Will you be able to engage live in witty repartee over Skype with an acquaintance with whom you share no language?  Probably not.  Not with the kind of system we have now.

Though crude, our theoretical program with the knowledge web described above might take us a step closer, but even if we could perfect and polish it, we're still a long way from truly useful translation or AI software.  After all, we don't even understand how we do these things ourselves.  How could we create an artificial version when the natural one still eludes our grasp?

 


The “Next Big Thing” Generation

So, a common topic in many of the writing communities I used to frequent was “the next big thing”.  Generally, what genre of book was going to be the next Twilight vampire romance or Hunger Games dystopia.  I had a lot of fun with those discussions, but only recently have I really stopped to consider how damaging the “next big thing” mindset is.  Not only to literature, but to any field and to our characters as people.

First, it's damaging to the quality and diversity of books coming out.  If everyone is chasing the next "popular" genre, they aren't writing, reading, or accepting for publication many good books that just happen not to be the next big thing, or that are part of the last big thing.  Even though 90% of the books in the genre of the last big thing were crap, and 7% of the rest were mediocre.

Which ties into my next issue: This attitude creates a hunger for similar books, despite quality or whether the reader would like to try something else because it creates a comfort zone for the reader.  They know they like dystopia because they liked Hunger Games, so they’re more willing to take a chance on another dystopia than a high fantasy or Mundane SF.  (Mundane SF itself having once been the next big thing, thus the proper noun moniker.)

But this is a false comfort zone for many reasons.  The reader may not actually like dystopia, but just that one book.  They may like dystopia but ignore other things they would also really enjoy, to keep from having to stray outside their comfort zone.  They may gorge on so many dystopias that they finally learn to see the flaws in the genre, and therefore ignore a wonderful dystopia down the line, because they've moved on to their next big thing.

Or, if they’re jumping on the bandwagon, they may perceive all of YA, say, as mediocre dystopias or obsessed with love triangles.  Perhaps they think all epic fantasy is ASOIAF, which they disliked, and so they don’t take the chance on other works.  For example, maybe they watched the TV show, and aren’t fans of gratuitous sexposition, and so they don’t read the books or similar books because they don’t want to get buried in another avalanche of incest and prostitutes.

Many authors have stories of agents or publishers telling them they have a great book, but they missed the window, or it doesn’t fit with whatever the next big thing is, and so they can’t sell it.  Or they already have ten of these, and even though 8 of them are sub-par, they can’t cancel the contract and pick up this new book.

Or perhaps they like the book, but everyone acquiring fantasy stories right now wants ASOIAF, not comedic contemporary fantasies, or low-key urban fantasies in the original mode without kick-ass leather-wearing, tattoo-bearing heroines with troubled backstories and seriously poor taste in lovers.

And the same can be said for things besides commercial fiction.  Google+ was going to be the next big thing in social media.  Then it was Ello.  Tinder was the next big thing in online dating, and it spawned dozens of clones.  Social media itself is something of a successful next big thing in human interaction and the Internet.  Object-Oriented programming was the next big thing in software design, and yet now the backlash has been going on for years.

Sometimes a next big thing is a great thing.  But the mentality of always hunting for the next big thing is not.  And despite the pressure from our capitalist economy, it might be better in the long term to look for alternatives.  And it is capitalism that is a major driver of this obsession, because history shows even mediocre products can ride the wave of a predecessor to make big money.  Following a successful formula is a bit of a dream situation for many producers of entertainment or products.  That’s why Walmart and most other chains have their own brand version of most popular products, from medicine to housewares to groceries.  The next big thing trend might make some people a decent amount of money in the short-term, but it has long-term effects that have created a sort of creativity pit that we’ll have a hard time climbing out of any time in the near future.  And in the short term, the people who don’t manage to catch the wave, as wonderful as their contributions to literature or software or society may be, are left choking on the dust.

 

Posted on January 19, 2017 in atsiko, Uncategorized

 


Hiatus: Again

So, as I hate my life and happiness and am currently in the process of working on a video game project, including the coding and a narrative arc that could probably be comfortably condensed into 47 fantasy trilogies, scheduled posting on the Chimney will be on indefinite hiatus.  That does not mean I won't be posting.  I probably will.  But it will be sporadic, and all post series are on hiatus.

I’m having a hell of a fun time, so though I am a bit sad that I won’t be ramping back up my posting schedule, I’m not too sad.

 

Posted on December 15, 2016 in atsiko, Blogging

 


AI and AlphaGo: Why It’s Not the Big Deal It’s Made Out to Be

I'd like to open this post by admitting I am not a Go master.  I've played a few times and watched Hikaru no Go when nothing else was on.  But that's about it.  However, I don't need to be an expert at the game to point out the flaws in some of the press coverage.  I suspect actual AI researchers already know what I mean.

The first thing to remember is that AlphaGo is a deep-learning program built on a neural network.  What that means is that rather than an artificial intelligence program, AlphaGo is an artificial learning program.  Public perception of AI is still focused on artificial intelligence, but the field has now expanded to cover many related, tangential, or component areas of study.  AlphaGo also has some form of reasoning ability, but this ability is solely related to Go; you cannot generalize its algorithms to other tasks.  In fact, DeepMind even admits there are better programs out there to play Chess.

Chess and Go are both "perfect information" (PI) games.  You can, if you so choose, know everything about a given game of Chess or Go by looking at the board: you know all the rules and the position of all the pieces.  PI games are a very popular area of AI research, because programs can do a lot with them.  The information can be reduced to a small set of states and rules, which is ideal for computers to excel at.  The trick, of course, is to teach the computer the best set of tactics for taking those rules and the initial state of the game, and trading states with another player to get to the win state.  And yet, even in two PI games, the best AI solution for a player capable of competing with the best of humans is different for each game.
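As a toy illustration of that state-and-rules framing (my own example, and nothing to do with AlphaGo's actual internals), here is a perfect-information game reduced to exactly that: a state, a rule generating legal moves, and a search that trades states until it reaches a win state.

```python
from functools import lru_cache

# Toy perfect-information game (one-heap Nim): the state is just the number of stones left.
# Rule: on your turn you remove 1, 2, or 3 stones; whoever takes the last stone wins.

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this state."""
    if stones == 0:
        return False  # no move available: the previous player took the last stone and won
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move that hands the opponent a losing state, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists: take the minimum and hope for a mistake

print(can_win(10), best_move(10))  # True 2 -- taking 2 leaves the opponent at 8, a losing state
```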

I like to call this "specific intelligence", although the more popular terms are weak AI or narrow AI: a kind of non-sentient intelligence focused on solving one task or a narrow range of tasks.  But even that is a bit of a misnomer.  After all, the machines aren't truly smart, just impressively programmed dumb machines.

However, a learning program like AlphaGo comes a bit closer to true intelligence (though not sentience) by being able to take the initially programmed rules and knowledge and extrapolate from them on its own, doing things it wasn't explicitly hard-coded by the programmers to do.  It's incredibly impressive.  But it's not "AI" in the way most layfolk think of it.  It's not general intelligence, even a crude version.  It's a very sophisticated piece of specific intelligence.

 

 

But there's a second flaw in the coverage.  Besides the great deal of mystique that has built up around Go (which isn't really an AI issue, and some of it is misplaced; another lifeform would not "almost certainly play Go" while finding Chess too human-specific), there's the issue that even as a powerful example of narrow AI, AlphaGo does not, as stated by some professional players, "play Go just like a human but better".  There has been much talk of its unorthodox tactics, and of its algorithm's focus on win-rate over all else.  Some have even said it made moves "only God could have made", a common expression for a perfect move.

 

But the real truth is this: much like a genetic programming system, a style of coding in which a computer is given basic building blocks of code and tasked with mixing them up until it finds a closer-to-optimal solution, AlphaGo has no idea it is playing Go.  As far as AlphaGo knows, it's just trading ones and zeroes around until it finds the desired sequence.  The ways in which a human player attempts to reach the winning board position are inherently different from the way a computer does, because they aren't really pursuing the same goal.

 

We're not particularly closer to strong or general AI than we were before.  Go isn't truly so different from any other PI game.  AlphaGo has not learned intuition.  It has merely played millions of games of Go, subtly adjusting the value it places on a given arrangement of stones as it goes, until its win-rate climbs higher and higher.  Although the process is superficially similar to the way a human learns the game, the lack of human framing devices such as vision has taught it to value entirely different things, and unlike a human, a computer has a perfect memory to go with the perfect information, and it is incapable of making an error.
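A heavily simplified sketch of that kind of value adjustment, just to show the flavour rather than DeepMind's actual algorithm (which combines deep neural networks with tree search): nudge a stored win-rate estimate for each position toward the outcome of every game it appears in, then prefer the positions with the highest estimates.

```python
import random

# Hypothetical sketch: adjust a position's estimated win-rate toward observed results.
values = {}                         # board position (any hashable key) -> estimated win-rate

def update(position, won, step=0.01):
    """Nudge the stored value for a position slightly toward this game's outcome."""
    old = values.get(position, 0.5)               # start from "no idea": 50%
    values[position] = old + step * ((1.0 if won else 0.0) - old)

def pick_move(candidate_positions):
    """Choose the successor position with the highest estimated win-rate."""
    return max(candidate_positions, key=lambda p: values.get(p, 0.5))

# After enough (here, randomly simulated) games, positions that tend to precede wins
# drift toward 1.0 and get picked more often -- nothing in the code knows it is "Go".
for _ in range(100_000):
    position = ("toy-position", random.randint(0, 9))
    update(position, won=random.random() < 0.5)

print(pick_move([("toy-position", i) for i in range(10)]))
```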

After that, we can consider the psychological warfare aspect of multi-player games.  AlphaGo may be able to beat anyone Lee Se-dol could, but it cannot judge its opponent's experience and thus alter its strategy to beat that player faster or more elegantly.  Instead, it will always play the same way every time, and react no differently to a master making three opening moves than to a novice making the same.  But where a human might see those moves and make a variety of plays depending on their intuition of the player's skill or likely next move, AlphaGo will continue to inexorably play exactly the move that has the highest chance of victory against any and all players, rather than the one with the highest chance of victory against this specific individual.

 

Posted on March 15, 2016 in atsiko, Science Fact

 


Getting Your Priorities Straight

I’ve had a great time working on this blog.  It’s been loads of fun, I’ve learned a lot about myself, and I’ve met some great people.  I really appreciate everyone who’s read and commented here.

That may sound like a goodbye speech, but what it really means is that I’ll be posting less on here than I used to.  Probably once or twice a month at the most.

This is for several reasons:

  1. I made a commitment to my friends' review blog, where I'll be reviewing various speculative fiction books in many genres.  I've posted several reviews there already, and I encourage you to go check them out.  If you like Young Adult books, my two co-reviewers each review about the same number of those a month as I do spec fic books, so definitely check that out.  Most recently, I reviewed Scott Westerfeld's Afterworlds with my co-reviewer Marisa Greene.  In about a week, you'll be able to read my review of Richard K. Morgan's The Dark Defiles, the third and final novel in his Steel Remains series.  Here's the blog: http://notesfromthedarknet.wordpress.com/
  2. I've decided to spend more time actually writing books.  High/epic fantasy has been becoming more popular in the YA field, and many of my projects fit that category, including my current WIP.  After that, you might get to see some real, live chimney-punk! 😉
  3. I've found less and less to write about on here as time goes by.  Part of this is that I've said a lot of what I have to say on some subjects, such as world-building.  And part of it is that more general topics, such as genre and writing mechanics, have already hit their third cycles on some of the blogs that started out around the same time I did.  Many of those blogs have even stopped posting at all.  I've been less active commenting on other blogs for that reason, which means a large decrease in traffic here as well.

The Chimney is still my home on the web, and will be for the foreseeable future.  I’m not closing it down, and I hope I never do.  This change has already been occurring over the past year or so, it’s just not been official until now.  Once my schedule settles down, and I get into the groove of writing prose, I’ll probably be back to posting here more regularly, especially since writing actual manuscripts really gets my creative and research juices flowing.

 

Posted on September 22, 2014 in atsiko, Blogging

 
