In 2006, Google launched Google Translate, a free translation tool that could translate text, websites, and documents. At the time, free commercial machine translation tools were still a rarity, and although Google Translate's output had poor grammatical accuracy, the tool still had a major impact on machine translation research.
Today, machine translation systems are used daily by millions of people around the world to communicate, interact, and understand one another better. Tools like Google Translate, Amazon Translate, and DeepL can provide relatively high-quality translations thanks to advancements in the machine translation field. But before tools like Google Translate, there were storage devices holding multilingual dictionaries and inventions built from film cameras and typewriters. Here’s a look at how machine translation has evolved over the years.
🔍 Early machine translation
Machine translation can trace its roots back to the work of Arabic scholars in the 9th century, many of whom believed that language was sacred, powerful, and coded. One of these scholars, Al-Kindi, developed techniques for cryptanalysis, frequency analysis, and statistical inference that are still relevant to machine translation today. Most of the foundational work, however, did not happen until the 20th century, when George Artsrouni designed a storage device that could be used to find the equivalent of any word in another language. In 1933, Peter Troyanskii, a Russian scientist, conceived of a device that could help with bilingual and multilingual translation. His invention combined cards in four different languages, a film camera, and a typewriter. The design is now considered ahead of its time, but Troyanskii died of stenocardia before he could finish the work.
In 1947, research began on using computers to translate languages, drawing on cryptography techniques perfected during the war, Claude Shannon’s information theory, and speculations about the universal features of language. In 1954, Georgetown University and IBM conducted the first public demonstration of machine translation in a bid to attract public interest and funding. In what is now known as the Georgetown-IBM experiment, the researchers translated sixty Russian sentences into English using an algorithm that first converted the Russian words into numerical codes. The experiment was a success and motivated governments to invest in machine translation and computational linguistics. However, the sentences used in the demonstration were carefully selected, and the system could not be used for everyday purposes.
🦙 The ALPAC report and its impact on machine translation
After the Georgetown-IBM experiment, there was an influx of money and interest in machine translation projects around the world. Expectations were high, even though research at the time consisted mainly of trial-and-error approaches. Some of the earliest systems were built around large bilingual dictionaries, with entries pairing words in the source language with their equivalents in the target language. Other systems were inspired by recent progress in linguistics and used bilingual glossaries and a computer program to translate texts. Because of the political climate at the time, most research in both the US and Russia was limited to Russian-English language pairs and became increasingly theoretical.
By the 1960s, it had become obvious that progress in machine translation was slow and gradual, leading to disillusionment among the US government sponsors, who set up a committee of seven scientists to investigate machine translation research. The Automatic Language Processing Advisory Committee (ALPAC) released a now-famous report in 1966 claiming that machine translation was slower, less accurate, and twice as expensive as human translation. It recommended that tools instead be developed to help human translators do their jobs better.
The report essentially killed machine translation research in the US and Russia, as machine translation was then considered a failure. For over a decade, there was little research on translation except in Canada and Europe, where it was motivated by cultural demand rather than politics. In Canada, work began on the TAUM project (Traduction Automatique de l'Université de Montréal), a transfer system for English-to-French translation that translated weather forecasts. The Q-Systems formalism was created during the course of the project and was a direct precursor to the Prolog programming language, which is still used in natural language processing today.
🌍 From research endeavors to real-world application
By the 1980s, interest in machine translation research in the US was back on the rise after some success in other countries. Systran, one of the few machine translation companies still operating in the US, had also built one of the most successful systems to date. Originally a ‘direct translation’ Russian-English system that depended on mainframe technology, it went on to cover other language pairs for the European Commission and was installed at NATO and the International Atomic Energy Agency. Systran was one of the earliest commercial systems designed for general application, quickly followed by Logos, which operated on German-English and English-French pairs.
By the end of the decade, the accessibility of microcomputers and the abundance of resources had translated into a boom in both commercial and tailor-made systems. Citicorp, Ford, and the Canadian Department of Employment and Immigration all had their own internal machine translation systems. Companies in Japan were also using machine translation to translate Japanese, Korean, and Chinese into English in a bid to become more competitive. Electronics companies like Toshiba, Sharp, Mitsubishi, and Panasonic created lower-end systems that were often restricted to a particular subject area.
🧠 Neural systems and the 2000s
Most research into machine translation in the early 2000s revolved around statistical machine translation and example-based machine translation. Statistical machine translation, in particular, was preferred because it was less expensive and more efficient than the rule-based methods used previously. These systems used large volumes of data to find the most probable translation for a given input; the more text they were fed, the better the translation.
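To make that idea concrete, here is a minimal sketch in Python of the noisy-channel formulation that classical statistical machine translation commonly relied on: choose the candidate translation that maximizes the product of a translation model and a language model, both estimated from data. The tiny probability tables below are hypothetical and exist only for illustration.

```python
# Toy noisy-channel sketch: pick the candidate e that maximizes P(f | e) * P(e),
# where f is the source phrase. Real systems estimate these probabilities from
# millions of sentence pairs; these tables are made-up placeholders.

# Translation model: P(source phrase | candidate translation)
translation_model = {
    ("la maison", "the house"): 0.7,
    ("la maison", "house the"): 0.3,
}

# Language model: P(candidate translation), estimated from monolingual text
language_model = {
    "the house": 0.6,
    "house the": 0.01,
}

def best_translation(source, candidates):
    """Return the candidate with the highest P(source | candidate) * P(candidate)."""
    return max(
        candidates,
        key=lambda e: translation_model.get((source, e), 0.0) * language_model.get(e, 0.0),
    )

print(best_translation("la maison", ["the house", "house the"]))  # -> the house
```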
In 2014, the first paper proposing the use of neural networks for machine translation was published. The following year, the first neural machine translation system was developed. Since then, the use of neural networks for machine translation has grown significantly as a more efficient and reliable approach. With neural networks, the model is able to “learn” from available data and translations, including its own, making it faster and more cost-effective than previous generations of machine translation systems.
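As a rough illustration of what such a system looks like under the hood, here is a minimal, hypothetical encoder-decoder ("sequence-to-sequence") model written in Python with PyTorch. The vocabulary sizes, dimensions, and random token IDs are placeholders; real neural machine translation systems add attention or Transformer layers and are trained on large parallel corpora.

```python
# Minimal sketch of the encoder-decoder architecture behind neural machine
# translation. All sizes and token IDs below are illustrative placeholders.
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence into a hidden state...
        _, state = self.encoder(self.src_emb(src_ids))
        # ...then decode the target sentence conditioned on that state.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # per-token scores over the target vocabulary

model = TinySeq2Seq()
src = torch.randint(0, 1000, (1, 7))  # a "source sentence" of 7 token IDs
tgt = torch.randint(0, 1000, (1, 5))  # a "target sentence" of 5 token IDs
print(model(src, tgt).shape)          # torch.Size([1, 5, 1000])
```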
🤖 The state of machine translation today
When Google announced its zero-shot translation system in 2016, it was the first time transfer learning had worked in machine translation for language pairs the system had never been trained on. By extending their translation system, researchers could use a single model to translate between many different languages, including pairs it had not explicitly been shown.
By this time, more research was focused on a wider range of languages, leading to the rise of multilingual neural machine translation. In 2019, researchers at Google and Bar-Ilan University published a paper on massively multilingual neural machine translation in which they trained a single NMT model that could translate over 100 languages to and from English. In 2022, Meta (the company formerly known as Facebook) launched its No Language Left Behind initiative to fund and carry out research into low-resource languages that had been overlooked in machine translation research. As part of this initiative, Meta researchers created datasets and machine translation models for over 200 languages, helping to broaden access to machine translation for speakers of lesser-known languages.
In August 2023, Meta announced SeamlessM4T, a multimodal translation and transcription model capable of speech-to-text, speech-to-speech, and text-to-speech translation for almost 100 languages. Dubbed “the Universal Speech Translator,” SeamlessM4T also comes with the largest open multimodal translation dataset to date, about 270,000 hours of aligned speech and text, which is publicly available under a research license.
📈 The future of machine translation
Machine translation, like language itself, has evolved considerably since George Artsrouni’s storage device in 1933. One of the clearest signs of progress since the beginning of machine translation research is the edit distance score, a metric that indicates how much post-editing a machine translation requires. The score, currently around 20% - 40%, is expected to keep decreasing as machine translation technology develops and grows. This should also mean better speech-to-text translations as well as transcriptions.
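To illustrate what such a score measures, here is a minimal, hypothetical sketch in Python: count the word-level insertions, deletions, and substitutions needed to turn a machine translation into its human post-edited version, then normalize by the length of the edited text. Production metrics such as TER also account for phrase shifts, and the example sentences below are made up.

```python
# Word-level edit distance between an MT output and its human post-edit,
# normalized by the post-edit length. A rough stand-in for post-editing metrics.

def word_edit_distance(hyp, ref):
    h, r = hyp.split(), ref.split()
    # Classic dynamic-programming (Levenshtein) table over words.
    dp = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        dp[i][0] = i
    for j in range(len(r) + 1):
        dp[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(h)][len(r)]

mt_output = "the cat sat in the mat"   # hypothetical machine translation
post_edit = "the cat sat on the mat"   # hypothetical human post-edit
edits = word_edit_distance(mt_output, post_edit)
score = edits / len(post_edit.split())
print(f"{edits} edit(s), post-editing score of about {score:.0%}")  # 1 edit(s), about 17%
```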
One of the current challenges in machine translation is localization, especially for languages and dialects with limited resources or data. This is an area that requires considerable research, especially given how context and culture vary from place to place. Even English, probably the most-researched language on Earth, has dialects, slang, and linguistic variations that make localization difficult. Although some research is being done on using machine translation to localize low-resource languages, this may not be feasible any time soon; with more research and development, it could become a realistic goal in the future.