Category Archives: Speculative Linguistics

Machine “Translation” and What Words Mean in Context

One of the most widely known flaws of machine translation is a computer’s inability to understand how meaning shifts with context.  After all, a machine doesn’t know what a “horse” is.  It knows that “caballo” has (roughly) the same meaning in Spanish as “horse” does in English.  But it doesn’t know what that meaning is.

And it certainly doesn’t know what it means when we say that someone has a “horse face”, or a “face like a horse”.


But humans can misunderstand meaning in context, too.  For example, if you don’t know how “machine translation” works, you’d think that machines could actually translate or produce translations.  You would be wrong.  What a human does to produce a translation is not the same as what a machine does to produce a “translation”.  That’s why machine and human translators make different mistakes when trying to render the original meaning in the new language.


A human brain converts words from the source language into meaning, and then that meaning back into words in the target language.  A computer converts words from the source language directly into words in the target language, creating a so-called “literal” translation.  A computer would suck at translating a novel, because the figures of speech that make prose (or poetry) what they are are incomprehensible to a machine.  Machine translation programs lack the deeply interconnected knowledge base that humans use when producing and interpreting language.


A more realistic machine translation (MT) program would require an information web with connections between concepts, rather than words, such that the concept of horse would be related to the concepts of leg, mane, tail, rider, etc., without any intervening linguistic connection.

Imagine a net of concepts represented as data objects.  These are connected to each other in an enormously complex web.  Then, separately, you have a net of linguistic objects, such as words and grammatical patterns, which are overlaid on the concept net and interconnected with it.  The objects representing the words “horse” and “mane” would not have a connection, but the objects representing the concepts underlying those words would have, perhaps, a “has-a” connection, itself represented by a connection or “association” object.
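A minimal sketch of this two-layer idea in Python might look like the following.  All the class names and the tiny horse/mane example are my own illustration, not any real MT system’s design:

```python
class Association:
    """A typed link between two concepts, e.g. a 'has-a' relation."""
    def __init__(self, kind, source, target):
        self.kind = kind
        self.source = source
        self.target = target

class Concept:
    """A language-independent node in the concept web."""
    def __init__(self, label):
        self.label = label            # label is for debugging only
        self.associations = []

    def connect(self, kind, other):
        link = Association(kind, self, other)
        self.associations.append(link)
        return link

class Word:
    """A linguistic object: a surface form in one language, tied to a concept."""
    def __init__(self, form, language, concept):
        self.form = form
        self.language = language
        self.concept = concept

# 'horse' and 'mane' are linked at the concept layer, not the word layer.
horse = Concept("horse")
mane = Concept("mane")
horse.connect("has-a", mane)

# Both "horse" and "caballo" point at the same underlying concept.
en = Word("horse", "en", horse)
es = Word("caballo", "es", horse)

assert en.concept is es.concept
assert any(a.kind == "has-a" and a.target is mane
           for a in en.concept.associations)
```

The point of the separation is that “caballo” = “horse” stops being a string equation and becomes two labels hanging off one shared node.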

In order to translate between languages like a human would, you need your program to have an approximation of human understanding.  A famous study suggested that in the brain of a human who knows about Lindsay Lohan, there’s an actual “Lindsay” neuron, which lights up whenever you think about Lindsay Lohan.  It’s probably lighting up right now as you read this post.  Similarly, our theoretical machine translation program’s information “database” would have a “horse” “neuron”, represented by the concept object I described above.  It would be separate from the linguistic object “neuron” holding the word “horse” (or the word group “Lindsay Lohan”), though the two would probably be connected.

Whenever you dig the concept of horse or Lindsay Lohan out of your long-term memory, your brain sort of primes the concept by loading it and related concepts into short-term memory, so your “rehab” neuron probably fires pretty soon after your Lindsay neuron.  Similarly, our translation program wouldn’t keep its whole data set in RAM constantly, but would load it from whatever our storage medium is, based on what’s connected to the currently loaded portion of the web.
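That priming behavior could be sketched as a small breadth-first “pre-load” over the concept web.  The toy long-term store and the one-hop default depth are purely illustrative:

```python
from collections import deque

# Toy long-term store: concept -> directly connected concepts.
LONG_TERM = {
    "lindsay": {"rehab", "actress"},
    "rehab": {"clinic"},
    "actress": {"film"},
    "horse": {"mane", "rider"},
}

def prime(start, depth=1):
    """Load `start` plus everything within `depth` hops into 'short-term memory'."""
    loaded = {start}
    frontier = deque([(start, 0)])
    while frontier:
        concept, d = frontier.popleft()
        if d == depth:
            continue
        for neighbor in LONG_TERM.get(concept, ()):
            if neighbor not in loaded:
                loaded.add(neighbor)
                frontier.append((neighbor, d + 1))
    return loaded

print(sorted(prime("lindsay")))   # ['actress', 'lindsay', 'rehab']
```

Pulling “lindsay” into memory drags “rehab” along with it, which is the whole point: the related concept is already warm before it’s needed.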

Current MT programs don’t translate like humans do.  No matter what tricks or algorithms they use, it’s all based on manipulating sequences of letters and basically doing math based on a set of equivalences such as “caballo” = “horse”.  Whether they do statistical analysis on corpora of previously translated phrases and sentences, as Google Translate does, to find the most likely translation, or a straightforward dictionary look-up one word at a time, they don’t understand what the text they are matching means in either language, and that’s why current approaches will never be able to compare to a reasonably competent human translator.
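The word-at-a-time dictionary look-up is easy to sketch, and the sketch makes the criticism vivid: nothing in it knows what any word means.  The five-entry Spanish-English dictionary is illustrative only:

```python
# A deliberately naive word-for-word "translator": pure symbol
# substitution, with no concept layer behind the strings.
DICTIONARY = {
    "el": "the", "caballo": "horse", "come": "eats", "la": "the",
    "manzana": "apple",
}

def word_for_word(sentence):
    """Replace each token with its dictionary equivalent, if any."""
    return " ".join(DICTIONARY.get(tok, tok) for tok in sentence.split())

print(word_for_word("el caballo come la manzana"))
# "the horse eats the apple" -- looks fine here, but any idiom, word
# reordering, or ambiguous sense breaks it, because nothing in this
# function knows what a horse is.
```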

It’s also why current “artificial intelligence” programs will never achieve true human-like general intelligence.  So, even your best current chatbot has to use tricks like pretending to be a Ukrainian teenager with bad English skills on AIM to pass the so-called Turing test.  A sidewalk artist might draw a picture-perfect crevasse that seems to plunge deep into the Earth below your feet.  But no matter how real it looks, your elevation isn’t going to change.  A bird can’t nest in a picture of a tree, no matter how realistically depicted.

Calling what Google Translate, or any machine “translation” program, does “translation” has to be viewed in context, or else it’s quite misleading.  Language functions properly only in the proper context, and that’s something statistical approaches to machine translation will never be able to imitate, no matter how many billions of dollars are spent on hardware or algorithm development.  Could you eventually get them to where they can probably usually mostly communicate the gist of a short newspaper article?  Sure.  Will you be able to engage live in witty repartee over Skype with an acquaintance who shares no language with you?  Probably not.  Not with the kind of system we have now.

Though crude, the theoretical program with the knowledge web described above might take us a step closer, but even if we could perfect and polish it, we’re still a long way from truly useful translation or AI software.  After all, we don’t even understand how we do these things ourselves.  How could we create an artificial version when the natural one still eludes our grasp?



The Translation Problem: People vs. Computers

In my last post, I introduced the topic of natural language processing and discussed how the context of a piece of language has an enormous impact on its translation into another language.  In this post, I want to address another issue with translation.  Specifically, I want to talk about how language is really an integrated function of the way the human brain models the world, and why this might make it difficult to create a machine translator isolated from the rest of an artificial intelligence.

When a human uses language, they are expressing things that are based upon an integrated model of the universe in which they live.  There is a linguistic model in their brain that divides up their concept of the world into ideas representable by words.  For example, let’s look at the word “pit bull”.  (It’s written as two words, but as a compound word, it functions as a single noun.)  Pit bull is a generic term for a group of terrier dog breeds.  Terriers are dogs.  Dogs are mammals.  Mammals are animals.  This relationship is called a hypernym/hyponym relationship.  All content words (nouns, verbs, adjectives) are part of a hierarchical tree of hypo-/hyper-nym relationships.

So when you talk about a pit bull, you’re invoking the tree to which it belongs, and anything you say about a pit bull will trigger the conversational participants’ knowledge and feelings about not only pit bulls, but all the other members of the tree to which it belongs.  It would be fairly trivial programming-wise, although possibly quite tedious data-entry-wise to create a hypo-/hyper-nym tree for the couple-hundred-thousand or so words that make up the core vocabulary of English.  But to codify the various associations to all those words would be a lot more difficult.  Such a tree would be a step towards creating both a world-model and knowledge-base, aspects of artificial intelligence not explicitly related to the problem of machine translation.  That’s because humans use their whole brain when they use language, and so by default, they use more than just a bare set of grammar rules when parsing language and translating between one language and another.
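The pit bull chain above could be coded as a simple lookup table plus a loop.  A real lexicon would need those couple hundred thousand entries (the tedious data-entry part), but the mechanism itself is this small:

```python
# Toy hypernym table: each word points at its immediate hypernym.
# Vocabulary is just the post's example, not a real lexicon.
HYPERNYM = {
    "pit bull": "terrier",
    "terrier": "dog",
    "dog": "mammal",
    "mammal": "animal",
}

def hypernym_chain(word):
    """Return the word and every hypernym above it, most specific first."""
    chain = [word]
    while chain[-1] in HYPERNYM:
        chain.append(HYPERNYM[chain[-1]])
    return chain

print(hypernym_chain("pit bull"))
# ['pit bull', 'terrier', 'dog', 'mammal', 'animal']
```

Anything the program “knows” about dogs or mammals then becomes reachable from “pit bull” by walking this chain, which is the invocation of the whole tree described above.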

One use of such a tree and its associations would be to distinguish between homographs or homonyms.  For example, if the computer sees a word it knows is associated with animals, it could work through the hypernym tree to see if “animal” is a hypernym of or an association with, say, the word “horse”.  Or, if it sees the word “grain”, it could run through the trees of other words to see if they are farming/crop-related or wood-related.  Or, perhaps, crossing language boundaries, if a language has one word that covers all senses of “ride”, and the other language distinguishes between riding in a car and riding a horse, the program could use the trees to search for horse- or car-related words that might let it make a best guess on which verb is appropriate in a given context.
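The “grain” case might be sketched like this: score each sense of the ambiguous word by how much its tree overlaps the trees of the surrounding words, and pick the best-scoring sense.  The senses and chains below are invented for illustration:

```python
# Hypothetical hypernym/association chains for two senses of "grain".
SENSE_CHAINS = {
    "grain/crop": ["grain", "cereal", "crop", "plant"],
    "grain/wood": ["grain", "texture", "surface", "wood"],
}

# Hypothetical chains for possible context words.
CONTEXT_CHAINS = {
    "harvest": ["harvest", "crop", "farming"],
    "plank": ["plank", "board", "wood"],
}

def best_sense(context_words):
    """Pick the sense whose chain shares the most nodes with the context chains."""
    def score(sense):
        chain = set(SENSE_CHAINS[sense])
        return sum(len(chain & set(CONTEXT_CHAINS.get(w, [])))
                   for w in context_words)
    return max(SENSE_CHAINS, key=score)

print(best_sense(["harvest"]))   # grain/crop
print(best_sense(["plank"]))     # grain/wood
```

The same overlap trick would serve the “ride” example: search the context trees for horse-related versus car-related nodes before choosing the target-language verb.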

The long and short of the case I intend to make is that a true and accurate translation program cannot be written without taking enormous steps down the path of artificial intelligence.  A purely rule-based system, no matter how many epicycles are added to it, cannot be entirely accurate, because even a human being with native fluency in both languages and extensive knowledge and experience of translating cannot be entirely accurate.  Language is too malleable and allows too many equivalent forms to always allow for a single definitive translation of anything reasonably complex, and this is why it is necessary to make value judgements based on extra-linguistic data, which can only be comprehensively modeled by using techniques beyond pure grammatical rules.


In the next post, I’ll talk about statistical methods of machine translation, and hopefully I’ll be following that up with a critique and analysis of the spec fic concept of a universal translator.



SpecLing #2: A Language Without Nouns?

Better late than never, I thought I’d talk today about the possibility of a language without nouns.  Last time, I talked about a language without verbs, and delved into what exactly defines a part of speech.  Here’s a quick recap:

  1. Parts of speech can be defined in a few ways: lexically, where a given root is only acceptable as one part of speech; syntactically, where a root’s location in the sentence and the words surrounding it determine the category applied to it, and there may be no lexical distinction involved; and morphologically, where a category of roots undergoes a specific set of morphological processes.
  2. Nouns are content words, meaning they have a meaning that can exist independently of a sentence.
  3. Verbs and noun roots in English can in fact switch categories.  You can bag your groceries by putting them in a bag, and rope you some cattle with a rope.


There have been several languages and language families put forward as lacking nouns: Tongan, Riau Indonesian, and the Salishan languages of the Pacific Northwest.  In the case of Riau, it seems words are lexically underspecified–that is, they can be used in any category.  In Salishan languages, you have what is often analyzed as a verbal category without a nominal one.  So, the word for “dog” is actually a verb meaning “to be a dog”.  The same goes for being a man.  One mans.


A question arises here:  While “man”-ness is a verb syntactically and morphologically in Salishan languages, is it possible to argue that these “verbs” aren’t just nouns in another form?  In the previous paragraph, I used the word “man” as a “verb” in English.  Are such verbs in Salishan merely placeholders for a true noun?  One difference in using verbs as opposed to nouns is the removal of the tedious “to be” constructions in English.  “He is a man.” requires more words than “He mans.”  That brings us back to the issue of the multiple definitions of a part of speech.  Lexically, it’s reasonable to say a language with such constructions lacks nouns.  Morphologically, if a root undergoes the same processes as words that are verbs, it’s reasonable to conclude it’s a verb.  The only argument to be had in this case is syntactic.  A predicate requires a verb.  If a Salishan pseudo-verb can be a predicate all on its own, then doesn’t that imply it’s actually a bona fide verb?  But verbs must be nominalized to become arguments of another verb, in which case you could argue they aren’t.  Now, the truth is that a noun/verb distinction has never been 100% delineable, so I think it can be argued in good faith that these roots are truly verbs.

In which case, it’s much simpler to conclude that we can have a language without nouns than that we can have a language without verbs.


As far as methods to construct a noun-less grammar, we have:

  1. Stative verbs as in Salishan
  2. I don’t know?  Any suggestions?


SpecLing #1: A Language Without Verbs?

This is the first in a series of posts on the subject of speculative linguistics, the study of language in a speculative context.  For example, studying constructed languages (conlangs), possible forms of alien communication, languages which violate earthly linguistic universals, etc.  Basically, it’s the application of real-world linguistics to non-real-world linguistic occurrences.

In this post, I’m going to talk about an interesting hypothetical situation involving a human-usable language without verbs.  I am going to get a bit technical, so to start I’ll give a short overview of the issues involved, and a refresher on some basic terms:

Parts of speech:  A verb is a part of speech, along with things like nouns, adjectives, adverbs, etc.  It is generally considered that all human languages have at least two parts of speech, verbs and nouns.  When linguists study pidgins–contact languages developed by two groups who speak unrelated languages–there are almost invariably nouns and verbs, the suggestion being that these two categories are required for human language.

Content words vs. function words:  Verbs, like nouns and adjectives, are “content words”.  That means they contain some inherent meaning.  Function words are things like prepositions and articles, which have a grammatical use, but don’t contain basic concepts like nouns and verbs do.

However, if you look at verbs, you can see that they do in fact have some similar grammatical elements beyond the basic concept they represent.  Tense, mood, aspect, person, number, etc., are all functions of verbs in various languages.  You can abstract out these features into function words, and in fact some languages do.

Something else to consider is that most languages have a very restricted pool of function words, whereas they can usually contain any number of content words–one for every concept you can devise.  And yet not all languages have the same number or even a similar set of function words.  So the question becomes, could you, by expansion of the categories of function words of various types and with assistance from other content categories, split up the responsibilities of the verb category?

Each part of speech consists, in the most basic sense, of a set of responsibilities for the expression of thought.  The only difference between function words and content words is whether there are some higher concepts overlaid on top of those responsibilities.  Now, there are, to an extent, a finite number of responsibilities to be divided among the parts of speech in a language.  Not all languages have the same parts of speech, either.  This suggests that we can decide a priori how to divide out responsibilities, at least to an extent.  Assuming that a part of speech is merely a set of responsibilities, and knowing that these sets can vary in their reach from language to language, it is possible that we could divide the responsibilities between sets such that there is no part of speech sufficiently similar to the verb to allow for that classification.

Even that conclusion is assuming we’re restricted to similar categories as used by currently known human languages, or even just similar divisions of responsibility.  However, that isn’t necessarily the case.  There are, to my mind, two major ways to create a verb-less language:

1. Vestigial Verbs: As this is a topic and a challenge in language that has interested me for a long time, I’ve made several attempts at creating a verb-less language, and over time, I like to think they have gotten less crude.  One of my early efforts involved replacing verbs with a part of speech I called “relationals”.  They could be thought of as either verbs reduced to their essence, or verbs atrophied over time into a few basic relationships between nouns.  Basically, they are a new part of speech replacing verbs with a slightly different responsibility set, but otherwise sharing a similar syntax.  I was very much surprised, then, while researching for this post, to come across a conlang by the name of Kēlen, created by Sylvia Sotomayor.  She also independently developed the idea of a relational, and even gave it the same name.  Great minds think alike?

Although our exact implementations differed, our ideas of a relational were surprisingly similar.  Basically, it’s what it says on the tin: it expresses a relationship between nouns (noun phrases).  However, relationals have features of verbs, such as valency (the number of arguments required by a verb), and Kēlen includes tense inflections to represent time, although my own did not, instead placing temporal responsibility on a noun-like construction representing a state of being.

An example of a relational, one that appears to be the basic relational in both Sotomayor’s Kēlen and my own conlang, is that of “existence”.  In English we would use the verb “to be”: “There is a cat.”  Japanese has the two animacy-distinct verbs “iru” and “aru”: “Neko ga iru.”  Kēlen makes use of the existential relational “la”: “la jacēla” for “there is a bowl.”  In my conlang, the existential relational was mono-valent, somewhat equivalent to an intransitive verb, but Kēlen’s can express almost any “to be” construction: “The bowl is red.” is “la jacēla janēla”, which takes a subject and a subject complement, and is thus bi-valent.  In English we have a separate category for these kinds of verbs, “linking” verbs, as opposed to classifying them as transitive, but both categories are bi-valent, taking two arguments.

2. No Verbs: Another experiment of mine in a verb-less language took what I consider to be the second approach, which is to simply eliminate the verb class and distribute its responsibilities among the other parts of speech.  Essentially, you get augmented nouns or an extra set of “adverbial” words/morphemes (though that’s an odd name considering there are no verbs, it’s the closest equivalent among the standard parts of speech).  This requires thinking of “actions” differently, since we no longer have a class of words that explicitly describe actions.

My solution was to conceive of an action as a change in state.  So to carry the equivalent of a verb’s information load, you have two static descriptions of a situation, and the meaning is carried by the contrast of the two states.  A simple, word-for-word gloss using English words for the verb “to melt” might be a juxtaposition of two states, one describing a solid form of the substance, and the other a liquid form: “past.ice present.water”.  There are all sorts of embellishments, such as a “manner” or “instrumental” clause that could be added: “past.ice present.water instrument.heat”, for example.  (The word after the period is the content word, and before it is some grammatical construction expressing case or tense.)


There are probably many more methods of creating a verb-less language.  A relational language would probably be the easiest for the average person to learn, because of its similarity to a verbed language.  However, a stative language doesn’t seem impossible to use, and depending on the flexibility of morphology and syntax in regards to which responsibilities require completion in a given sentence, could be an effective if artificial method of human communication.


Next time, I’m going to consider the possibility of a noun-less language.  I’ve never tried one before, and honestly I don’t have high hopes for the concept.  Especially if it had normal verbs.  How would verb arguments be represented in a language without nouns?  Well, that’s really a question for the next post.

If anyone has some thoughts on the usability of a verb-less language, or the structure, or can recommend me some natlangs or conlangs that eschew verbs, I’d love to hear about it in the comments.


Posted by on November 11, 2013 in atsiko, Conlanging, Linguistics, Speculative Linguistics

