Einstein is to scientist as Messi is to footballer.
Paris is to France as Tokyo is to Japan.
Jobs is to Apple as Ballmer is to Microsoft.
These are just a few of the many analogies that AI systems can complete accurately, thanks to breakthroughs in Natural Language Processing. In the past couple of years, research into systems that produce increasingly sophisticated representations of words has come into its own, with research groups releasing ever more powerful models that consistently outperform the previous state of the art. With these techniques, computers can infer a lot about the world we live in, like Lionel Messi's profession, or the company Steve Ballmer ran.
But what if there are some parts of our world that are better left out of these systems?
As the saying goes, with great power comes great responsibility, so it is important that we take some time to discuss the ethical implications of these advances: namely, the perpetuation of human biases. Specifically, gender bias as it commonly exists in language, without getting into whether language itself should be more gender neutral.
What exactly do we mean by bias?
Colloquially speaking, bias is defined as a prejudice for or against one person or group, typically in a way considered to be unfair. Bias in the machine learning sense is defined a bit differently, as an “error from erroneous assumptions in a learning algorithm.” In other words, a model consistently making the same mistakes.
When we talk about bias in NLP, we can actually talk about both kinds. The pre-existing biases in our society affect the way we speak and what we speak about, which in turn translates into what's written down, and that written text is ultimately what we use to train machine learning systems. When we train models on biased data, the bias gets incorporated into them, and our own prejudices end up confirmed and preserved.
To better understand how this happens, we first need a basic understanding of how computer programs are able to process text, and the algorithms we use for that. In case you missed our article on how machines understand language, the short version is that words are represented by lists of numbers called word embeddings that encode information about the word’s meaning, usage, and other properties. Computers “learn” these values for every word by being given training data of many millions of lines of text, where words are used in their natural contexts.
Since word embeddings are lists of numbers, they can be treated as coordinates in a high-dimensional space (and projected onto a plane for visualization), and the distance between words, or more precisely the angle between their vectors, is a way of measuring their semantic similarity. These relationships can be used to generate analogies.
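To make this concrete, here is a minimal sketch in Python with made-up three-dimensional vectors (real embeddings are learned from data and have hundreds of dimensions). It shows how the angle between vectors measures similarity and how simple vector arithmetic completes the man : king :: woman : queen analogy.

```python
import numpy as np

# Toy 3-dimensional embeddings, invented for illustration only.
# Real embeddings are learned from text and have 100-300+ dimensions.
embeddings = {
    "man":   np.array([ 1.0,  0.1, 0.2]),
    "woman": np.array([-1.0,  0.1, 0.2]),
    "king":  np.array([ 1.0,  0.9, 0.3]),
    "queen": np.array([-1.0,  0.9, 0.3]),
    "apple": np.array([ 0.0, -0.8, 0.9]),
}

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" - "man" + "woman" should land closest to "queen".
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]

best = max(
    (w for w in embeddings if w not in ("king", "man", "woman")),
    key=lambda w: cosine_similarity(target, embeddings[w]),
)
print(best)  # queen
```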
In this example from the previous article, the orange arrows represent royalty, and the blue arrows gender, capturing the relationship man is to king as woman is to queen.
But what happens if we want to extend this analogy to other words, say professions?
Man is to computer programmer as woman is to _________
Common sense says that the missing term should be computer programmer because the term is not intrinsically gendered, unlike king and queen. Can you guess how the computer, with a standard word embedding system, fills in the blank?
Man is to computer programmer as woman is to homemaker (the second most probable word is housewife)
You can try your own analogies using this word embedding tool.
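If you would rather explore from code than from a browser tool, the sketch below uses the gensim library to load pretrained word2vec vectors and query the same analogies. The model name, download size, and exact completions depend on your setup; with the widely used Google News vectors, "homemaker" has been reported among the top completions for the programmer analogy.

```python
# A minimal sketch using gensim's downloader; the exact results depend on
# your gensim version and which pretrained embeddings you choose to load.
import gensim.downloader as api

# Large download (~1.6 GB): word2vec vectors trained on Google News text.
model = api.load("word2vec-google-news-300")

# man : computer_programmer :: woman : ?
# (vector arithmetic: computer_programmer - man + woman)
print(model.most_similar(positive=["woman", "computer_programmer"],
                         negative=["man"], topn=3))

# man : king :: woman : ?  (the classic, unproblematic analogy)
print(model.most_similar(positive=["woman", "king"],
                         negative=["man"], topn=3))
```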
Machine translation offers another example. With some systems, translating the gender-neutral Hungarian sentences “Ő egy orvos. Ő egy nővér,” to English results in “He’s a doctor. She’s a nurse,” assuming the gender of both subjects.
These are obviously not the ideal outcomes. The training data behind the model that produced the analogy most likely mentioned men in programming contexts, and women in homemaking contexts, far more often than the reverse. The ideal outcome of the he's-a-doctor/she's-a-nurse conundrum is less black-and-white, but we could use a gender-neutral pronoun, give the user the option of specifying gender, or at least choose the same pronoun for both.
Machine learning systems are what they eat, and natural language processing tools are no exception, as became crystal clear with Tay, Microsoft's AI chatbot. There is a general tendency to assume that more data yields better-performing models, and as a result, the largest corpora are typically web-crawled datasets. Since internet content consists of real, human language, it naturally exhibits the same biases that humans do, and often not enough attention is paid to what the text actually contains.
Reducing gender bias
At some point during this discussion, someone will raise the question: if we want AI to be a true representation of humanity, should we even try to remove bias? Should AI be merely descriptive of human behavior, or should it be prescriptive? It's a fair question, but we need to keep in mind that biased models are not just producing gauche analogies; sometimes they're straight-up inaccurate: a female computer programmer is not equivalent to a homemaker.
As AI engineers, we must always consider who will be using our systems and for what purpose. At Unbabel, we need to be mindful of our clients' customers and strive to deliver the most accurate and balanced translations. Having humans in the loop certainly reduces the risk posed by gender-biased training data, helping to close the gap where machine learning falls short. But what can we, as engineers, do to reduce gender bias in NLP systems?
The most intuitive method is to modify the training data. If we know our models learn bias from data, perhaps we just need to de-bias it. One such technique is "gender-swapping": the training data is augmented so that for every gendered sentence, an additional sentence is created in which pronouns and gendered words are replaced with those of the opposite gender, and names are substituted with entity placeholders. For example, "Mary hugged her brother Tom" would also produce "NAME-1 hugged his sister NAME-2." This way, the training data becomes gender-balanced, and the model does not learn gender characteristics associated with particular names. Trained this way, a model would produce better analogies, because it would have seen computer programmers in male and female contexts an equal number of times.
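Here is a minimal sketch of that augmentation step for English, assuming a hand-written swap table and a hypothetical list of names. A production version would need proper tokenization, a much larger lexicon, part-of-speech tagging (to tell possessive "her" from object "her"), and named-entity recognition instead of a fixed name list.

```python
import re

# Toy swap table and name list, for illustration only. Note that "her" is
# ambiguous (possessive -> "his", object -> "him"); a real system would use
# part-of-speech tagging to disambiguate. Here we assume the possessive case.
GENDER_SWAPS = {
    "he": "she", "she": "he", "him": "her", "her": "his",
    "his": "her", "hers": "his", "brother": "sister", "sister": "brother",
    "mother": "father", "father": "mother",
}
NAMES = {"mary", "tom", "john", "anna"}  # placeholder; use NER in practice

def gender_swap(sentence: str) -> str:
    """Return a gender-swapped copy of `sentence`, with names anonymized."""
    out, name_ids = [], {}
    for token in re.findall(r"\w+|[^\w\s]", sentence):
        low = token.lower()
        if low in NAMES:
            # Replace each distinct name with a stable placeholder.
            name_ids.setdefault(low, f"NAME-{len(name_ids) + 1}")
            out.append(name_ids[low])
        else:
            out.append(GENDER_SWAPS.get(low, token))
    return " ".join(out)

print(gender_swap("Mary hugged her brother Tom"))
# -> NAME-1 hugged his sister NAME-2
```

Both the original sentence and its swapped copy would then be kept in the training set, so the model sees each context with both genders.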
However, it is important to note that this approach is straightforward for English, a language without productive grammatical gender, whereas for many other languages merely swapping pronouns like his/her and nouns like sister/brother is not sufficient, because adjectives and other modifiers express gender as well. For instance, Romance languages such as French, Portuguese, or Spanish have no neuter grammatical gender. As Helena Moniz, a linguist and researcher at the University of Lisbon, explained, "languages derived from Latin lost their neutral grammatical gender a long time ago."
To my knowledge, this kind of de-biasing technique remains largely unexplored for non-English corpora.
Another method, specific to machine translation, helps make translations more gender-accurate by adding metadata to the sentences that records the gender of the subject. For example, the sentence "You are very nice" is gender-ambiguous in English, but if the parallel Portuguese sentence were "Tu és muito simpática," we would add a tag marking a female subject to the beginning of the English sentence so the model could learn the correct translation. After training, if we request a translation and supply the desired gender tag, the model should return the matching translation rather than just the majority gender.
If the Hungarian-English system were trained in this way, we could ask it to translate "Ő egy orvos" and receive the translation "She is a doctor," or "Ő egy nővér" and receive "He is a nurse." To perform this at scale, we would need to train an additional model that classifies the gender of a sentence and use it to tag the sentences, adding a layer of complexity.
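As a rough illustration of what that preprocessing could look like, the sketch below prepends a gender tag to each English source sentence based on a simple inspection of its Portuguese counterpart. The tag names and cue words are assumptions made for the example; a real pipeline would use a trained gender classifier and morphological analysis rather than keyword matching.

```python
import re

# Hypothetical preprocessing step: tag English source sentences with the
# grammatical gender expressed on the Portuguese side of a sentence pair,
# so an MT model can learn to produce the requested gendered translation.

# Toy cue lists; a real system would use a trained classifier instead.
FEMININE_CUES = {"simpática", "ela", "obrigada", "cansada"}
MASCULINE_CUES = {"simpático", "ele", "obrigado", "cansado"}

def gender_tag(portuguese_sentence: str) -> str:
    """Guess the subject's gender from the Portuguese side of the pair."""
    tokens = set(re.findall(r"\w+", portuguese_sentence.lower()))
    if tokens & FEMININE_CUES:
        return "<FEMALE>"   # tag names are an assumption for this sketch
    if tokens & MASCULINE_CUES:
        return "<MALE>"
    return "<NEUTRAL>"

def tag_source(english_src: str, portuguese_tgt: str) -> str:
    """Prepend the inferred gender tag to the English source sentence."""
    return f"{gender_tag(portuguese_tgt)} {english_src}"

print(tag_source("You are very nice", "Tu és muito simpática."))
# -> <FEMALE> You are very nice
```

At inference time, the user (or an upstream classifier) supplies the tag, and the model produces the translation that matches it.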
These methods are effective at reducing gender bias in NLP models but are time-consuming to implement, as they require additional linguistic information that might not be readily available or even possible to obtain.
Thankfully, this is a quickly growing area of research. At the Annual Meeting of the Association for Computational Linguistics that took place this summer, which many AI Unbabelers attended, there was an entire track of paper presentations devoted to Bias in Language Processing, as well as the first-ever Workshop on Gender Bias in Natural Language Processing.
Google has also invested resources into mitigating this problem. In December 2018, they announced that Google Translate would begin returning translations of single words from four languages to English in both the feminine and masculine form.
It's great to see industry leaders tackling gender bias in algorithms, but the work is far from over. We're still struggling with a lack of diversity in the AI industry: according to MIT Technology Review, "women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively." We can't deny that this is partially responsible for the problem. As engineers, we can't shy away from the issue, hiding behind the assumption that technology is neutral. Especially since the consequences of our inaction aren't just anecdotal, like the examples we've shared: bias in algorithms can lead to discrimination in hiring processes, loan applications, and even in the criminal justice system.
This is not a feature, it’s a bug. And as our awareness grows, we must realize our role in creating technology that works for the many, not the few.
Sources
Mitigating Gender Bias in Natural Language Processing: Literature Review https://arxiv.org/abs/1906.08976