A Brief History of Machine Translation: When Did It Start?

Last Updated August 12, 2021

How did simple language rules evolve into advanced machine translation systems that can translate huge volumes of text in milliseconds? It’s all been decades in the making.

The possibility of automatic translation has long been an object of fascination for humankind.

It finally materialized in the 1950s, when we began building the first computer systems for machine translation.

Let’s look at the origins of machine translation.

Origins of Machine Translation

Have you seen “The Imitation Game”?

Alan Turing, a British computer scientist and the main character of the 2014 film, noted that to learn properly, a computer should imitate the human mind and work through constant trial and error.

Turing was already attempting to use this approach to produce natural language as early as 1949.

As part of his work, Turing developed the now-famous Turing Test. It judges a machine’s intelligence based on its ability to fool a person into thinking they’re speaking with another human being (rather than with a computer).

A human questioner asks a series of questions to both a human and a computer and tries to decide which is the human and which is the machine based on the responses.

In 2018, Google Duplex successfully made an appointment with a hairdresser over the phone, and the receptionist was completely unaware they were talking to an AI system. Some consider this a modern-day Turing Test pass, even though the exchange didn’t follow the format of Turing’s original test.

What this demonstrates is an advanced development in machine learning and conversational AI. Turing’s work helped set the stage for developments in machine translation.

Rule-Based Translation for Government Use

During the Cold War, both the American and Soviet governments were especially motivated to invest not only in cryptography and code breaking, but also in systems that would help translate messages quickly.

At that time, machines were developed to perform the first limited rule-based translations. By rule-based, we mean translations that rely on built-in linguistic rules and dictionaries for each language pair.

To work properly, the full vocabulary and grammar of each language had to be entered into the computer, which lengthened an already time-consuming process. The early translations these systems produced were indeed quite “machine-like,” lacking proper syntax and grammatical correctness.
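To make the rule-based idea concrete, here is a minimal sketch in Python, far simpler than any real Cold War-era system: a hand-built bilingual dictionary plus a single hand-written grammar rule. The tiny English-to-Spanish lexicon and part-of-speech tags are invented for illustration.

```python
# Minimal rule-based translation sketch (illustrative only).
# Each entry maps an English word to a (translation, part-of-speech) pair.
LEXICON = {
    "the": ("el", "DET"),
    "red": ("rojo", "ADJ"),
    "car": ("coche", "NOUN"),
}

def rule_based_translate(sentence):
    # Step 1: word-by-word dictionary lookup.
    tokens = [LEXICON.get(w, (w, "UNK")) for w in sentence.lower().split()]
    # Step 2: apply a grammar rule for the language pair. Spanish places
    # adjectives after the noun, so swap any ADJ sitting just before a NOUN.
    i = 0
    while i < len(tokens) - 1:
        if tokens[i][1] == "ADJ" and tokens[i + 1][1] == "NOUN":
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
            i += 2
        else:
            i += 1
    return " ".join(word for word, _ in tokens)

print(rule_based_translate("The red car"))  # -> "el coche rojo"
```

Every word and every reordering rule has to be entered by hand, which is exactly why scaling this approach to entire languages was so slow.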

These machines were also nothing like the personal computers we use today. They were the size of small trucks and weren’t used for business or personal applications.

Computer-Aided Translations for Everyone

Rule-based translation started being used outside the military in the 1990s.

Thanks to the rise of the internet, the need for international communication increased at an unprecedented rate.

Global brands suddenly faced the challenge of distributing and marketing products in many target markets and needed a way to accelerate their translation efforts.

In 1992, the first public machine translation service appeared: it translated an online forum from English into German.

Soon after, in 1997, AltaVista’s Babel Fish was introduced: a system that could automatically translate text into several languages. The program was freely available online and brought machine translation to the masses.

Machine translation’s next leap was statistical machine translation (SMT). This method translates the source material based on the most common translations observed in large collections of previously translated texts.

SMT was not without its flaws. Its key weakness is that it can only translate a phrase if that phrase appears in its reference texts.

Translated sentences were therefore often nonsensical, because the program could not cope with many of the ambiguities of context.
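For a feel of the statistical idea, here is a minimal sketch in Python, not a real SMT engine: a toy “phrase table” whose probabilities stand in for how often each translation appeared in previously translated text. The phrases and numbers are invented for illustration.

```python
# Minimal statistical machine translation sketch (illustrative only).
# Probabilities stand in for counts gathered from previously translated texts.
PHRASE_TABLE = {
    "good morning": {"buenos días": 0.92, "buen día": 0.08},
    "thank you":    {"gracias": 0.97, "te agradezco": 0.03},
}

def smt_translate(phrase):
    candidates = PHRASE_TABLE.get(phrase)
    if candidates is None:
        # SMT's weakness in miniature: an unseen phrase cannot be translated.
        return None
    # Pick the translation that was most common in the reference data.
    return max(candidates, key=candidates.get)

print(smt_translate("good morning"))   # -> "buenos días"
print(smt_translate("see you later"))  # -> None: not in the reference texts
```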

Neural Machine Translation

Enter modern neural machine translation (NMT) systems, which differ from their rule-based and statistical predecessors in their ability to learn and improve with each subsequent translation.

NMT adds a key component to the machine translation process: context.

Neural translation systems work somewhat like the human brain, constantly looking for patterns and making decisions on their own.

They recognize patterns in the source material to build a context-based interpretation that predicts the likelihood of a sequence of words. All parts of the model are trained end-to-end to maximize translation quality.
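As a rough illustration of what “trained end-to-end” means, here is a minimal encoder-decoder sketch, assuming PyTorch; the vocabulary sizes, dimensions, and random toy data are purely illustrative, and a real NMT model is vastly larger and trained on millions of sentence pairs.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=32):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)  # scores over target words

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sentence into a context vector.
        _, context = self.encoder(self.src_emb(src_ids))
        # Decode conditioned on that context; the output layer yields a
        # probability distribution over the next target word at each step.
        hidden, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.out(hidden)

# Toy training step: random "sentences" of word IDs, with cross-entropy
# teaching the network to assign high probability to the correct next word.
model = TinySeq2Seq(src_vocab=100, tgt_vocab=100)
src = torch.randint(0, 100, (1, 5))   # one source sentence, 5 words
tgt = torch.randint(0, 100, (1, 6))   # its target-side translation
logits = model(src, tgt[:, :-1])      # predict each next target word
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 100), tgt[:, 1:].reshape(-1))
loss.backward()  # gradients flow through every component end-to-end
```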

In 2020, state-of-the-art neural machine translation could instantly translate texts with 60-90% accuracy, which means there is still quite a bit of editing and quality assurance to be done before these systems could pass the old Turing Test.

Let Us Take You into the Future of Machine Translation

The translation industry continues to face new challenges: the growing importance of voice translation, the use of ChatGPT for translation, fast data analysis, and overcoming the communication barrier for the over 6,000 languages used worldwide.

Machine translation is transforming the way you do business around the world.

The translation and localization experts here at Summa Linguae Technologies help companies like yours with efficient and practical innovations in multilingual communication.

We can tailor language solutions to meet your specific objectives and help your business grow in all your target markets.

Contact us today to get started.
