A Brief History of Machine Translation: When did it start?

The history of machine learning and machine translation still shapes how these technologies develop today. How did simple language rules evolve into advanced neural systems that can translate huge amounts of text in seconds? Let’s take a look at the origins of machine translation.

The possibility of automatically translated communication has long been an object of fascination for humankind, and it finally materialized for the first time in the 1950s.

It was then that the first computer systems for machine translation were created. During the Cold War, governments were especially motivated to invest not only in cryptography and code-breaking, but also in systems that could translate messages quickly.

At that time, the machines able to perform the first limited rule-based translations were the size of small trucks, nothing like the personal computers we use today. This put translation for business or personal use completely out of the picture.

The translations themselves were distinctly “machine like,” lacking proper syntax and grammatical correctness. For the systems to work at all, the full vocabulary and grammar of each language had to be entered into the computer by hand, which made an already time-consuming process even longer.

Computer-aided translations for everyone

Rule-based translation started being used outside the military in the 1990s. Thanks to the rise of the internet, the need for international communication grew like never before. Global brands suddenly faced the challenge of distributing and marketing products in many markets and needed a quick way to scale up their translation efforts.

Machine learning

In 1992, the first machine translation service went live, translating an online forum from English to German.

A few years later, in 1997, AltaVista’s Babel Fish was introduced: a system that could automatically translate text into several languages. The service was freely available online and brought machine translation to the masses.

However, the service was far from flawless. Translated sentences were often nonsensical because the system could not cope with the many ambiguities of natural language.

Machine learning was the answer to these problems. But to learn about machine learning, we have to go back to the 1940s.

Alan Turing, the British mathematician and computing pioneer, noted that to learn properly, a computer should imitate the human mind and work by constant trial and error. Turing attempted to apply these ideas to producing natural language as early as 1949.

As part of this work, Turing devised the now-famous “Turing Test,” or “Imitation Game,” which judges a machine’s intelligence by its ability to fool a person into thinking they are conversing with another human being rather than a computer.

So far, no computer has definitively passed the test. But given the rapid pace of progress in machine learning, it may only be a matter of time.

From clumsy interpretations to neural machine translations

Jumping back to the present, the machine translation revolution is happening right before our eyes thanks to advanced neural machine translation algorithms.

Modern neural translation systems differ from their rule-based predecessors in their ability to improve and learn with each subsequent translation. Loosely inspired by the human brain, they constantly look for patterns in language and make decisions on their own. Today, this technology is everywhere.
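To get a feel for just how accessible neural translation has become, here is a minimal sketch in Python using the open-source Hugging Face transformers library. The model named below is a real, publicly available English-to-German model, but it is only an example; any comparable translation model could be swapped in, and the library plus a backend such as PyTorch are assumed to be installed.

# A minimal sketch: translating a sentence with a pretrained
# neural machine translation model via Hugging Face transformers.
from transformers import pipeline

# Load a publicly available English-to-German translation model
# (an example choice; other language pairs work the same way).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

result = translator("Machine translation has come a long way since the 1950s.")
print(result[0]["translation_text"])  # prints the German rendering of the sentence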

At the same time, the translation industry is facing new challenges: the growing importance of voice translation, fast data analysis, and breaking down communication barriers among the more than 6,000 languages spoken worldwide.
