“Sokath, His Eyes Uncovered!”, or, Is the Universal Translator A Myth?

by Mina

There are two series which have coloured our collective consciousness when we think of the concept of a universal translator: The Hitchhikers Guide to the Galaxy and Star Trek (in all its guises). As a linguistic aside, “hitchhikers” was initially spelled in various ways (hitch hiker, hitch-hiker, hitchhiker, with or without the apostrophe) until it settled as “The Hitchhikers Guide” around 2000 (even the abbreviation has various forms: HG2G, tHGttG, HHGTTG, etc.). One wonders how many pitfalls communication may involve if one word can have so many variants within a single language.

HG2G began its life in 1978 as a BBC Radio 4 series. This was followed by five novels, with a TV series sandwiched between novels two and three. The author, Douglas Adams, was involved in all of these versions, but they are far from identical to each other, and it is best to see them as a collection of leitmotifs. I am ignoring the 2005 film, which feels like a huge “mistranslation” (even if Adams was briefly involved in it before his death) and misses the point on several levels: it is an attempt to turn HG2G into a politically correct action story with a romantic subplot, dumbed down to the lowest common denominator, obsessed with Vogons, and not at all true to the original radio/TV series or to the early-1980s-Britain pastiche that was so much fun. This sense of fun is very much present in one leitmotif, the Babel fish, described by the “book” as:

“The Babel fish is small, yellow, leech-like, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier, but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech centres of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language.”

In my mind, I can always hear the voice of Peter Jones as the “book” narrating this passage in both the radio and original TV series (the “book” is almost a character in its own right). The description goes on to state that the Babel fish was a “mind-bogglingly” useful invention, and there is a hysterically funny passage on how it was used to disprove the existence of God (incidentally, a whole generation of SF nerds integrated “mind-boggling” and “I don’t give a dingo’s kidneys” into their everyday vocabulary thanks to this passage). Although the Babel fish makes it possible for Arthur Dent, the most unprepossessing human ever to travel the galaxy, to understand and communicate with aliens, it is also dangerous:

“…the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.”

Star Trek (ST) does not have a “Babel fish”, but it does have a “universal translator”. It began its life in Gene Roddenberry’s original ST as a handheld device; by Star Trek: The Next Generation (STNG), it had been incorporated into the communicator pins all Starfleet personnel wear on their uniforms, and all Starfleet vessels are equipped with one. Although Enterprise is seen as a poor cousin of the other series in the ST canon, it is actually the only series to look in depth at the development of the universal translator, which is mostly taken for granted in the series and films that take place “later” (if we view the ST universe chronologically). In Enterprise, we actually have a skilled linguist on the crew, Ensign Hoshi Sato. We see that new languages have to be added to the universal translator by gathering enough data to build a “translation matrix” (a data construct facilitating the conversion of symbols and sounds from one language to another). And Hoshi Sato does not just use this translation matrix, she improves upon it, inventing the “linguacode” translation matrix to anticipate and speed up the conversion of new and unknown languages. She is a main character whose linguistic skills are used time and again to get the crew out of thorny situations; I cannot stress enough how unusual this is in an SF (or any) series. We will come back to the idea of “training” a universal translator, and to translation matrices, when we look at Machine Translation technology today.

Not everyone in the ST universe sees a universal translator as a good thing. There is a scene in ST Discovery between Burnham and a Klingon (Kol) where Burnham sees the universal translator as a means of communication and of reaching a peaceful accord, while Kol sees it as another attempt by the Federation to subsume Klingon culture. My husband, for one, was annoyed that the Klingons in Discovery speak Klingon all the time; I actually rather enjoyed the series’ courage on this point, since subtitling puts off some viewers, but I think Klingons speaking amongst themselves should speak Klingon. Interestingly, Klingon began as gibberish and was only later developed into a language by Marc Okrand for ST III: The Search for Spock in 1984, building on some phrases originally devised by the actor James Doohan (Scotty) for ST: The Motion Picture in 1979. Okrand developed a grammar and expanded the vocabulary and, should you be so inclined, you can actually learn Klingon online through the Klingon Language Institute. It is fascinating to see such interest from both producers and viewers in a constructed language when, at the same time, most of the series hinges on the existence of a universal translator.

The universal translator is shown to have its limits in the STNG episode Darmok. This episode is based on the premise that a universal translator cannot make sense of a language built on abstraction and metaphor, deeply rooted in culture, myth and history. Stranded on a planet with the Tamarian captain Dathon (a Child of Tama), Picard struggles to learn enough about Tamarian metaphors to communicate with Dathon as they face a common enemy. The Tamarian language is described by Troi as a language based on narrative imagery, with reference to the individuals and places which appear in their mytho-historical accounts, much like using “Juliet, on her balcony” as a metaphor for romance. Picard slowly learns to communicate with Dathon, who tells him the story of “Darmok and Jalad, at Tanagra”. In exchange, Picard reframes the Earth myth of “Gilgamesh and Enkidu, at Uruk” for him. The whole episode is an absolute delight for anyone interested in languages, communication, linguistics, logic and alien thinking. At the end, Picard has learned enough to communicate to Dathon’s first officer both his regret at Dathon’s death and the fact that he and Dathon reached communion, or true communication, before Dathon died:

TAMARIAN FIRST OFFICER: Zinda! His face black. His eyes red— (expressing anger)

PICARD: —Temarc! The river Temarc. In winter. (asking for him to be silent and listen)

FIRST OFFICER: Darmok? (asking if his Captain’s plan was successful)

PICARD: …and Jalad. At Tanagra. Darmok and Jalad on the ocean. (the plan of two strangers working together to fight a common threat was successful)

FIRST OFFICER (to others, amazed): Sokath! His eyes open! (thank God, you understood)

PICARD (continuing): The beast of Tanagra. Uzani. His army. (shaking his head) Shaka, when the walls fell. (explaining how Dathon died and his regret at Dathon’s death)

FIRST OFFICER: Picard and Dathon at El-Adrel. (a new metaphor enters the Tamarian language to signify successful communication between two races who were strangers to each other)

I have added the “translation” in brackets after each utterance, but the lovely thing about this episode is that, having accompanied Picard and Dathon on their journey at El-Adrel, the viewer can understand the entire exchange without help.

In his article in The Atlantic, Ian Bogost argues that the episode has its shortcomings because it tries to confine the language of the Children of Tama to our understanding of how language works, i.e. to our familiar denotative speech methods. Bogost stresses that the Tamarian language works more like poetry or allegory, which replaces one thing with another (rather than simply comparing one thing to another, as metaphor does). But, he argues, the Children of Tama are not replacing one image with another; they are using the familiar logic (the intention) behind each situation they cite to communicate in a manner that is almost computational, i.e. procedural rhetoric takes precedence over verbal and visual rhetoric and dictates their immediate actions. Whether or not you feel that Darmok lends itself to this level of analysis, or that Bogost is right or wrong, the whole episode serves to demonstrate a completely different linguistic system and logic.

How close are we to such a universal translator? How effective are Machine Translation (MT) tools? The best-known MT tool is Google Translate, which has moved from being just a website to also existing as a mobile app, and from translating text alone to also translating text in images and translating speech. How accurate is it, for example, when translating into English? As a linguist, I can tell you that it depends on the language combination. It copes reasonably well with Romance languages, whose syntax is not too dissimilar to English; less well with German, whose syntax is quite different; and not at all well with Estonian, where the syntax and logic of the language are very different (and it is a small, rare language with a more limited dataset). MT currently needs to be used with caution and with a clear aim in mind. It can be very useful if you just want the gist of an article, for example, when a rough translation will do. However, it is dangerous to rely on MT for a medical or legal text where precision is vital. MT can sound very convincing until you get a native speaker to check its accuracy, because MT has to cope with languages being flexible and ambiguous, with meaning derived not just from a word itself but also from its co-text (e.g. collocations) and context. Take the word “criminal”: in a novel, “Oh, that’s criminal!” may simply mean I consider your taste in wallpaper a travesty; in an article, “David was arrested for his criminal activities” means David really did commit a crime.
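If you want to see this for yourself, the sketch below runs an off-the-shelf NMT model over the two “criminal” sentences. It is a minimal illustration, assuming the freely available Hugging Face transformers library and its default English-to-French model (an illustrative stand-in, not Google Translate’s own engine); the point is simply that the output deserves a native speaker’s scrutiny.

    # A minimal sketch, assuming the Hugging Face "transformers" library
    # (pip install transformers torch) and its default English-to-French
    # translation model; an illustrative stand-in, not Google Translate.
    from transformers import pipeline

    translator = pipeline("translation_en_to_fr")

    # The same word, two contexts: the model sees words and probabilities,
    # not intentions, so the output is worth checking with a native speaker.
    sentences = [
        "Oh, that's criminal!",
        "David was arrested for his criminal activities.",
    ]
    for sentence in sentences:
        gist = translator(sentence)[0]["translation_text"]
        print(sentence, "->", gist)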

That said, how MT works has changed over time: early rule-based systems (using lexical, syntactic and semantic rules, which hit their limits at the sheer number of exceptions and variables required) were replaced in the 1990s by statistical methods (using a large corpus of examples, but divorced from context and thus often prone to errors) and, more recently, we have moved towards neural MT (NMT). It is NMT that most resembles the language matrices of the universal translator mentioned in Enterprise, and where fiction and reality begin (on a humble scale as yet) to converge. In NMT, the input is a sentence in the source language, with source-language grammar, and the output is a sentence in the target language, with target-language grammar. In between sits an algorithm, an application of deep learning in which massive datasets of translated sentences are used to “train” a model capable of translating between any two languages. For example, it must be able to cope with all variants of the word “hitchhiker”.
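To make the idea of “training data” concrete, here is a toy Python sketch (with an invented two-sentence corpus) of the very first step: every word of each parallel sentence is mapped to a numeric ID, because the model itself only ever sees numbers. Real systems use learned subword vocabularies (such as BPE or SentencePiece) rather than this naive whitespace splitting.

    # Toy illustration: parallel sentences become sequences of numeric IDs.
    parallel_corpus = [
        ("the hitchhiker waits", "l'auto-stoppeur attend"),
        ("the guide is useful", "le guide est utile"),
    ]

    def build_vocab(sentences):
        # Reserve 0 for padding and 1 for unknown words.
        vocab = {"<pad>": 0, "<unk>": 1}
        for sentence in sentences:
            for word in sentence.split():
                vocab.setdefault(word, len(vocab))
        return vocab

    src_vocab = build_vocab(src for src, _ in parallel_corpus)
    tgt_vocab = build_vocab(tgt for _, tgt in parallel_corpus)

    def encode(sentence, vocab):
        return [vocab.get(word, vocab["<unk>"]) for word in sentence.split()]

    for src, tgt in parallel_corpus:
        print(encode(src, src_vocab), "->", encode(tgt, tgt_vocab))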

One established NMT structure is the encoder-decoder architecture, composed of two recurrent neural networks (RNNs) used together to create a translation model. Textual data is transformed into numeric form and back into different textual data (its translation):

“An encoder neural network reads and encodes a source sentence into a fixed-length vector. A decoder then outputs a translation from the encoded vector. The whole encoder–decoder system, which consists of the encoder and the decoder for a language pair, is jointly trained to maximize the probability of a correct translation given a source sentence.” (https://machinelearningmastery.com/introduction-neural-machine-translation/)
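As a rough illustration of that description, here is a minimal encoder-decoder sketch, assuming the PyTorch library and toy vocabulary sizes chosen at random. A production system adds many refinements, but the shape is the same: the encoder compresses the source sentence into a fixed-length vector, and the decoder unrolls a translation from it.

    # A minimal encoder-decoder sketch in PyTorch (toy dimensions).
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, src_vocab_size, emb_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(src_vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

        def forward(self, src_ids):
            # The final hidden state is the "fixed-length vector"
            # summarising the whole source sentence.
            _, hidden = self.rnn(self.embed(src_ids))
            return hidden

    class Decoder(nn.Module):
        def __init__(self, tgt_vocab_size, emb_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(tgt_vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, tgt_vocab_size)

        def forward(self, tgt_ids, hidden):
            # Predict each target word from the previous target words
            # plus the encoder's summary vector.
            output, hidden = self.rnn(self.embed(tgt_ids), hidden)
            return self.out(output), hidden

    encoder, decoder = Encoder(1000), Decoder(1200)
    src = torch.randint(0, 1000, (1, 5))   # one 5-word source sentence
    tgt = torch.randint(0, 1200, (1, 6))   # its reference translation

    # Joint training maximises the probability of the correct translation:
    # the decoder reads the target shifted by one word ("teacher forcing")
    # and is penalised (cross-entropy) for each wrong prediction.
    logits, _ = decoder(tgt[:, :-1], encoder(src))
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, 1200), tgt[:, 1:].reshape(-1))
    print(loss.item())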

This architecture has problems with long sequences of text, which is why we now have an “encoder-decoder with attention” model. The system learns to focus only on the “relevant” part of the sequence when translating each individual word, so that length is no longer a problem. Google Translate uses this architecture and feeds it with millions of stored sentences. It is a system that still has its problems, however: training and inference speeds are still too slow, it can be ineffective with rarer words (it struggles with large vocabularies and a myriad of contexts), and it sometimes fails to translate a word it does not recognise, simply leaving the source-language word in the target-language sentence. MT initially focused mainly on the written word, but work is now being done on the spoken word as well.
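The “attention” step itself is surprisingly compact. The sketch below (again assuming PyTorch, and using simple dot-product scoring; Bahdanau and Luong attention are the classic variants) shows the core computation: for each word being produced, the decoder scores every encoder state, turns the scores into weights with a softmax, and takes a weighted average, so no single fixed-length vector has to carry the whole sentence.

    # The core of attention: a weighted average over encoder states.
    import torch
    import torch.nn.functional as F

    def attend(decoder_state, encoder_states):
        # decoder_state:  (hidden_dim,)         - where the decoder "is" now
        # encoder_states: (src_len, hidden_dim) - one vector per source word
        scores = encoder_states @ decoder_state   # relevance score per word
        weights = F.softmax(scores, dim=0)        # scores -> probabilities
        context = weights @ encoder_states        # weighted average
        return context, weights

    encoder_states = torch.randn(7, 64)   # a 7-word source sentence
    decoder_state = torch.randn(64)
    context, weights = attend(decoder_state, encoder_states)
    print(weights)   # most of the mass sits on the "relevant" words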

So is a universal translator possible in our world? (N)MT will continue to improve, that is for sure. Whether it can ever fully replace the need for a human linguist remains to be seen. It cannot yet do what is one of the human mind’s biggest strengths: make inferences and assumptions based on context, background knowledge, culture and an instinct for which rules can be broken and which cannot. It cannot spot mistakes, decipher bad style or pick up the nuances of embedded, deeper meanings. MT is based on algorithms and probability; it works with separate units (numeric representations of words) and, even with the development of “attention” and “deep learning”, it cannot yet get a quick overview of a large sequence of units or adjust to circumstances when making a decision. It is not yet truly flexible. It is possible that one day computers will imitate the way the human mind makes connections (recreating the intention of the communication in the source language in the target language) so closely that we will not be able to tell the difference. The operative word is imitate: we are still a long way from a “sentient” computer able to think autonomously rather than apply a set of complex mathematical rules. That does not mean we will never get there, but we are not yet at a point where the computer can translate the full meaning of “Picard and Dathon at El-Adrel” into other languages.

~

Bio:

Mina is a translator by day, an insomniac by night. Reading Asimov’s robot stories and Wyndham’s The Day of the Triffids at age eleven may have permanently warped her view of the universe. She publishes essays in Sci Phi Journal as well as “flash” fiction on speculative sci-fi websites and hopes to work her way up to a novella or even a novel some day.
