Why Facebook and Google’s New Multilingual Models are Great Steps for AI

October 27, 2020

Last week, Facebook announced M2M-100, a new machine translation model that translates directly between any pair of 100 languages without relying on English as an intermediate step. And just this week, Google announced mT5, a new multilingual language model trained on 101 languages. This news is significant because it marks a milestone on the way to the long-standing goal of broadly available automated translation between any of the world’s languages. For the AI community, it indicates meaningful progress toward a single common, baseline multilingual machine translation model, which can then be further adapted into any number of specialized models.
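To make “direct” concrete: the released M2M-100 checkpoints can translate, say, French to German in a single step, with no English intermediate. Here is a minimal sketch using the publicly released checkpoint through the Hugging Face transformers library (the example sentence is illustrative):

```python
# pip install transformers sentencepiece
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Publicly released M2M-100 checkpoint (the 418M-parameter variant).
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

# Translate French -> German directly, with no English pivot step.
tokenizer.src_lang = "fr"
encoded = tokenizer("La vie est comme une boîte de chocolat.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("de"),  # force German output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```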

Here’s why this news is promising for the machine translation community and the customer service industry as a whole.

2020: A big year for language-based AI

Between Facebook’s M2M-100 translation model, Google’s mT5, and OpenAI’s GPT-3, there have been some big breakthroughs in AI this year centered around language. Many of these groundbreaking models have been released as “open source”, meaning they are freely available for both research and commercial applications. This is a big deal, since training these massive language models is extremely costly and requires immense computational resources, and very few companies can afford to develop them from scratch. But now, smaller expert AI companies such as Unbabel have the opportunity to adapt these existing, “pre-trained” models to our own specific use cases. This has already resulted in major technology advancements with significant impact on our customers. For example, Unbabel has built its own translation quality estimation models for customer service-specific use cases on top of XLM-R and other pre-trained language models, and has in turn contributed its own machine learning research back to the community.
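As a rough illustration of this adaptation pattern (a generic sketch, not Unbabel’s actual production setup), one common approach to quality estimation is to put a small regression head on top of XLM-R and fine-tune it to predict a quality score for a source/translation pair:

```python
# pip install transformers torch
# Generic sketch, not Unbabel's production QE model: fine-tune XLM-R
# with a one-dimensional regression head that predicts a quality score.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=1,                 # single regression output
    problem_type="regression",
)

# Encode the source sentence and its machine translation as one pair.
batch = tokenizer(
    ["Olá, como posso ajudar?"],  # source (illustrative)
    ["Hello, how can I help?"],   # machine translation (illustrative)
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([[0.9]])    # human quality judgement, e.g. in [0, 1]

# One training step; in practice this runs over a large annotated corpus.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
```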

One of my favorite things about machine learning is that most major research in recent years is open-sourced for the greater good of the community. So, an advancement for one company is an advancement for all. Even though many major corporations have become prominent players within AI research, the academic spirit of sharing our learnings carries on.

Advancing toward a common machine translation model

When looking at advancements like M2M-100, it’s important to keep in mind that there’s still quite a lot of room to grow when it comes to translation quality. Facebook reported that translation quality, as measured by BLEU scores, improved after removing English as an intermediate language, sometimes by up to 10 points. That’s a good outcome. However, many of the direct language pairs covered by M2M-100 still have BLEU scores in the 10s or low 20s, which indicates a level of accuracy that is still insufficient for most practical commercial use cases. For context, translations that are understandable to reasonably good typically score in the 30-40 range, and quality improves as BLEU scores rise.
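For readers unfamiliar with the metric: BLEU scores a system translation by its n-gram overlap with one or more human references, on a 0-100 scale. A minimal sketch with the open-source sacrebleu package (sentences are illustrative):

```python
# pip install sacrebleu
import sacrebleu

hypotheses = ["The cat sat on the mat."]             # system output (illustrative)
references = [["The cat was sitting on the mat."]]   # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU on the 0-100 scale discussed above
```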

Beyond BLEU scores, our team at Unbabel is currently working on a new neural framework for measuring the accuracy and quality of machine translations across many different languages. This open-source framework, called COMET, is designed to predict human judgements of machine translation quality at levels of accuracy unachievable by previous measurement methods. Many existing metrics, including BLEU, have limited value in accurately measuring translation quality once systems reach high levels of accuracy, as many of today’s neural state-of-the-art systems do.
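Since COMET is open source, here is a minimal sketch of scoring a translation with it. Note that the checkpoint name and the exact API shown follow a later public release of the library and may differ across versions:

```python
# pip install unbabel-comet
# The checkpoint name and API follow a recent release of the open-source
# COMET library and may differ from version to version.
from comet import download_model, load_from_checkpoint

model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))

data = [{
    "src": "Olá, como posso ajudar?",  # source sentence (illustrative)
    "mt":  "Hello, how can I help?",   # machine translation
    "ref": "Hi, how may I help you?",  # human reference
}]
prediction = model.predict(data, batch_size=8, gpus=0)
print(prediction.scores)  # one predicted quality score per segment
```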

All of this talk of accuracy doesn’t discount the fact that Facebook combined significant, large-scale engineering effort with new model architectures and training methodologies to create this advancement. It’s encouraging to see the machine translation community get a step closer to a single multilingual model that supports translation between more than 100 languages. Such a “pre-trained” model could be the basis of many further-adapted, improved translation models for different language pairs and use cases. For example, in customer service, a universal multilingual translation model could be extremely valuable in cases where there is no English-speaking agent.
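As a sketch of what that further adaptation could look like (the data, language pair, and hyperparameters are all illustrative, and real domain adaptation involves far more data and care), the released checkpoint can be fine-tuned on in-domain parallel text:

```python
# pip install transformers sentencepiece torch
# Illustrative only: a single gradient step of in-domain fine-tuning.
import torch
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

# A tiny, made-up customer-service-style parallel "corpus".
tokenizer.src_lang, tokenizer.tgt_lang = "fr", "de"
batch = tokenizer(
    ["Votre commande a été expédiée."],
    text_target=["Ihre Bestellung wurde versandt."],
    return_tensors="pt", padding=True,
)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
loss = model(**batch).loss  # cross-entropy against the in-domain targets
loss.backward()
optimizer.step()
```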

Humans need to remain in the loop

One thing I expect to continue into 2021 and beyond is the need for humans in the loop when delivering multilingual machine translation. Humans in the loop are the best way to improve translation quality in enterprise use cases. In addition, humans provide the critical feedback that allows us to adapt and improve machine learning models like M2M-100, mT5, and others. In high-stakes situations, such as customer service, AI augments human agents and makes their jobs easier and more efficient. Humans, for the foreseeable future, will remain a critical part of the last mile of customer support.


About the Author

Alon Lavie

Alon Lavie is the VP of Language Technologies at Unbabel, where he leads and manages the US AI lab based in Pittsburgh, and provides strategic leadership for Unbabel's AI R&D teams company-wide.

From June 2015 to March 2019, Alon was a senior manager at Amazon, where he led and managed the Amazon Machine Translation R&D group in Pittsburgh.

In 2009, he co-founded a technology start-up company by the name of "Safaba Translation Solutions", and served the company as Chairman of the Board, President and CTO. Safaba developed automated translation solutions for large global enterprises that allowed them to migrate and maintain large volumes of content in all the languages of their markets. Safaba's approach focused on generating client-adapted high-quality translations using machine-learning-based technology. In late June 2015, Safaba was acquired by Amazon.

For almost 20 years (1996-2015), Alon was a Research Professor at the Language Technologies Institute at Carnegie Mellon University. He now continues to serve as an adjunct Consulting Professor at CMU. His main research interests and activities focus on machine translation adaptation approaches with and without human feedback, applied to both high-resource language pairs as well as low-resource and minority languages. Additional interests include automated metrics for MT evaluation (specifically, the METEOR and COMET metrics), translation quality estimation, and methods for multi-engine MT system combination. Alon has authored or co-authored over 120 peer-reviewed papers and publications (Google Scholar h-index of 45 and i10-index of 122).

Alon served as the President of the International Association for Machine Translation (IAMT) (2013-2015). Prior to that, he was president of the Association for Machine Translation in the Americas (AMTA) (2008-2012), and was General Chair of the AMTA 2010 and 2012 conferences. Alon is also a member of the Association for Computational Linguistics (ACL), where he was president of SIGParse, ACL's special interest group on parsing (2008-2013).

In August 2021, at the 18th biennial Machine Translation Summit conference, Alon was awarded the 2021 Makoto Nagao IAMT Award of Honour for his contributions to the field of Machine Translation.
