Machine Translation: A Brief History

Published in Machine translation on 04/02/2020

Machine translation is a highly topical subject. No matter which conference Translation Agency Vienna | Connect Translations Austria GmbH attends, there is plenty of talk about the remarkably good results that machine translation systems can now produce. However, this has not always been the case: machine translation has had a turbulent past.

Machine Translation: The Early Days

The dream of creating a machine translation system is nearly as old as computer science itself. Not long after the first computers had been developed, people became eager to use them to decode texts in foreign languages. The U.S. military was the first to experiment with this in the 1950s and 1960s, working with Russian texts. What followed was a wave of initial euphoria: at long last it was possible to gain at least a basic understanding of Russian texts without the help of translators or interpreters. If even the very first results could provide some insight into texts that had previously been completely incomprehensible, then surely it was only a matter of time before fully automatic translation programmes would produce machine translations of the same quality as those done by humans; or so the theory went. They could not have been more wrong. Despite extensive research and funding from high-level sponsors, the quality of machine translations barely improved over the following years. The reason was simple: these early trials were carried out under the assumption that language was nothing but a rule-based “code” that simply had to be “decoded”. But every translator knows that when it comes to translating elements such as jokes, puns or culture-specific items, merely exchanging one word for another is not nearly enough.

Machine Translation: At Its Lowest Ebb

In 1966, the so-called ALPAC report put an abrupt end to the initial euphoria. The report claimed that machine translation was fundamentally unfeasible, rendering the preceding two decades of research seemingly obsolete. What followed were decades of disillusionment, during which research in the field of machine translation was scarce. It was not until the 1980s that investment in machine translation research picked up again. By then, researchers had realised that language should not be treated as a code to be deciphered, but as a complex system of contexts and interconnections that can also be analysed statistically. This realisation gave rise to statistical machine translation. Statistical machine translation programmes exploit the fact that some words co-occur more frequently than others, which means that some combinations of words are statistically more likely. Because they produce far more natural-sounding results than rule-based programmes, they represented another small breakthrough in the field of machine translation.
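To illustrate the basic idea, the following short Python sketch (a simplified toy example, not part of any real translation system; the sample corpus and candidate sentences are invented purely for demonstration) scores candidate word sequences by how often their word pairs co-occur in a small corpus and prefers the statistically more likely combination:

    # Toy illustration of the co-occurrence idea behind statistical machine
    # translation: score candidate word sequences by how often their adjacent
    # word pairs (bigrams) appear in a small sample corpus. The corpus and
    # candidates below are invented for demonstration only.
    from collections import Counter

    sample_corpus = [
        "the weather is nice today",
        "the weather is cold",
        "nice weather makes the day better",
    ]

    # Count how often each adjacent word pair occurs in the corpus.
    bigram_counts = Counter()
    for sentence in sample_corpus:
        words = sentence.split()
        bigram_counts.update(zip(words, words[1:]))

    def fluency_score(candidate):
        # Higher score = the candidate's word combinations are more common
        # in the sample corpus, i.e. statistically more likely.
        words = candidate.split()
        return sum(bigram_counts[pair] for pair in zip(words, words[1:]))

    # Two hypothetical candidate translations of the same source sentence.
    candidates = ["the weather is nice", "the nice is weather"]
    print(max(candidates, key=fluency_score))  # prints "the weather is nice"

Real statistical systems combine such language-model statistics with translation probabilities learned from vast amounts of bilingual text, but the underlying principle of preferring statistically likely word combinations is the same.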

Machine Translation: New Perspectives

Still, the results produced by statistical machine translation programmes were rated as unsatisfactory well into the 2010s. It was not until the emergence of so-called neural machine translation programmes, which have been developed since 2016 and make use of artificial intelligence, that the quality of machine translations improved significantly. What future developments lie ahead in the field of machine translation remains unclear. Only one thing seems certain: machine translation is here to stay. Even so, in many areas, such as SEO website translation, human translation still significantly outperforms machine translation.