Why are they all women?

January 29, 2019

Present-day digital assistants keep our diaries in order, give us directions and take care of our correspondence. They are, first and foremost, virtual secretaries, which means they are filling a role traditionally played by women.

Not so coincidentally, Amazon’s Alexa, Google’s Assistant, Microsoft’s Cortana, and Apple’s Siri all have one thing in common: they are female. Or at least they are as female as thousands of lines of code can be. Users know this. Gender cues such as an assistant’s voice, name, and manner of speech allow users to infer its gender immediately.

You could be forgiven for assuming that most AI assistants are women simply because their mostly male designers want them to be. A quick Google search certainly conjures the traditional image of a secretary, but that isn’t the whole story. Many big technology companies have carried out research showing a user preference for female assistants, and the data back them up. What lies behind those data deserves investigation.

How gender theories of speech influence our technology

A great deal of research exists on the differences between male and female speech, much of which has been picked up by the media and popular science books. Robin Lakoff’s 1975 work, “Language and Woman’s Place,” suggested a number of ways in which male and female speech differs. More recent research, by Deborah Tannen and others, has built on this work. The general assessment is that female speech is more polite, less direct, and more accommodating to the conversational partner.

If the user wants a personal assistant that is accommodating and polite, it shouldn’t be a surprise that in consumer research the preference for a female voice comes up time and time again. Whether this preference is based on fact or fiction, though, is a matter of debate.

Gender difference theory has prominent detractors. Janet S. Hyde reviewed 46 meta-analyses and found relatively small differences between the genders. Deborah Cameron writes in The Guardian: “The idea that men and women differ fundamentally in the way they use language to communicate is a myth in the everyday sense: a widespread but false belief… Whether or not they are ‘true’ in any historical or scientific sense, such stories have consequences in the real world. They shape our beliefs, and so influence our actions.” When technology companies survey their users, these are the beliefs users draw upon when they answer.

The exceptions to the rule are curious in their own right. IBM’s Watson assistant is used by healthcare professionals to assist with tumor assessment and cancer diagnosis, among other tasks. It seems that positioning an AI as an expert and an authority lends itself to a male persona; research shows that both genders pay greater attention to male voices than to female voices. IBM’s choice to make Watson male, then, shouldn’t come as a surprise. In a matter of life and death, you want to give your AI the best chance of being listened to, even if that chance rests on a dubious premise.

Smart assistants in administrative or secretarial roles, historically dominated by women, are given a female persona, perpetuating a gender stereotype for a new generation, while smart assistants in domains associated with men are given a male persona. The result is a present-day reinforcement of traditional gender roles.


Technology’s gender problem

Technology has a gender problem. Even at technology firms with hundreds of employees, it isn’t unusual for the number of female software engineers to be in the single digits. Speaking out about the problem, as many do, comes with its own risks. There isn’t much point in speaking anonymously to a newspaper when you are described as “a female employee at X company”; such statements often narrow the field to just a handful of people.

This gender disparity manifests in various ways. Some are trivial: “They didn’t understand why sitting on beanbags doesn’t work as a woman,” one female engineer told me. “It’s because we wear skirts,” she immediately clarified, as if speaking to a colleague.

Other symptoms are more serious: some firms don’t have adequate disposal facilities for sanitary products. Critics also argue that if a narrow worldview exists within a company, it can end up being built into the products the company develops, just as it is built into the company culture.
The Apple Health app, for instance, was roundly criticized because at launch it didn’t feature a period tracker, a feature essential for the 48% of users who are women, the same demographic that is underrepresented in the company’s workforce. The concern is that when products are created in a vacuum, they are created without consideration for how they affect the wider world and without critical engagement with, or interpretation of, the data.


The data problem

Corporate studies carried out by Amazon and Microsoft found a preference for the female voice in virtual assistants. A Microsoft spokesperson told the Wall Street Journal: “For our objectives — building a helpful, supportive, trustworthy assistant — a female voice was the stronger choice.”
For many, a female voice evokes the qualities most people want in a digital assistant: reliability, efficiency and, somewhat troublingly, deference. The reality is that people are not particularly polite to their AI. The evidence suggests that users of AI, children in particular, give machines direct instructions lacking the usual “please” or “thank you.” If one of the most constant presences in a child’s life becomes an AI, the gender of that AI could affect the way the child interacts with others. More research certainly needs to be done.

As for adults, that research is already coming in. A team of researchers at Stanford tested gender stereotypes in voice-based assistants and found that male-voiced AI was rated “more positively with respect to friendliness and competence” compared to female-voiced AI. The study suggests that any cue to gender, such as a name or voice, can trigger a stereotypical response.

Crucially, they found that female-voiced computers in a dominant role were evaluated more negatively than male-voiced computers in the same role. In short, it’s easy to tell Alexa what to do because she is female. As soon as she starts telling you what to do, you might want to make her male.
Equally interesting, the test subjects in their research denied being influenced by the perceived gender of the computer voices. Clearly, they were.

As the researchers explain, “by choosing a particular voice, a designer or engineer may trigger in the user’s mind a whole set of expectations associated with that voice’s gender. For designers and engineers to assume that any voice is neutral is a mistake; a male voice brings with it a large set of expectations and responses based on stereotypes about males, whereas a female voice brings with it a large set of expectations and responses based on stereotypes about females.”

Their findings reflect the corporate research done by the likes of Microsoft and Amazon. The choice of gender does have an impact, and it would be willfully naive to think otherwise.

The researchers spell out the choice: “The decision to imbue a given technology with voice can therefore involve difficult choices. Designing technology that conforms to the user’s gender stereotypes may be the simplest way to meet his or her expectations about the technology. On the other hand, technology that challenges these stereotypes may serve to change, in the long run, the deeply ingrained biases that underlie the findings in the present study.”

It certainly seems that big tech is following the data, which means following the path of least resistance, and with it, the bias.

The result is a reinforcement loop of historic gender roles and stereotypes, one that shows little sign of letting up. A child today may be used to both male and female secretaries in a way their parents were not, but if the secretarial function in their lives is an ever-present, ever-deferential digitally rendered woman, it wouldn’t be unreasonable to assume that they will grow up with a similar set of gender biases.

Technology creeps into our lives — through film, TV, ads. The gender of AI may at first seem a quirk, but it isn’t. By the time we engage with it on a daily basis, it’s hard not to think of it as normal. Yet engage we must, because not having an opinion is, by default, taking a side.


About the Author

Content Team

Unbabel’s Content Team is responsible for showcasing Unbabel’s continuous growth and incredible pool of in-house experts. It delivers Unbabel’s unique brand across channels and produces accessible, compelling content on translation, localization, language, tech, CS, marketing, and more.
