In a futuristic city far, far away lives a utopian society of wealthy citizens who lead carefree lives, unaware of the workers who dwell underground and operate the machines that run the city above. One day, the son of the city’s founder, the mastermind behind the subterranean machinery, falls in love with a working-class girl and decides to help her and the workers fight for a better life. Fearing he will lose his power, the father seeks out the help of a wacky scientist and builds a robot that looks just like the girl his son loves, meant to help him put down the workers’ revolt by inciting chaos throughout the city.
This could easily be the plot of a contemporary sci-fi movie, but it’s actually the story told in the 1927 film Metropolis, where artificial intelligence made its on-screen debut. The German silent movie featured a humanoid robot who was programmed to comply with the evil schemes of the two men who created it.
AI has since been a recurring theme in movies and TV shows, but its presence in fiction goes even further back.
According to the AI narratives report by The Royal Society, the oldest known record of something that remotely resembles AI can be found in Homer’s Iliad. In it, Hephaestus, the god of smithing, builds “attendants made of gold, which seemed like living humans.” The god is also credited with the creation of Talos, a giant bronze machine that protected the shores of Crete from invaders’ attacks.
In the centuries that followed, further tales of intelligent machines emerged, describing living copper knights that guarded secret gateways or brazen heads that could answer any question humans asked them. While Hephaestus’s creations were more or less beneficial to people, these newer stories often ended badly: either the AI was destroyed, or humans were driven mad by it or ultimately killed by their own creations.
However far back in history you go, the narratives look much like today’s. The theme of machine rebellion against humans has been revisited time and time again in movies such as 2001: A Space Odyssey and The Terminator.
The Royal Society’s report identifies further common themes in AI narratives. There is a tendency to anthropomorphize AI in a variety of ways: by depicting the characters as metal versions of humans, like Star Wars’s C-3PO or the robot in Metropolis; by giving them distinctly gendered human bodies rather than androgynous figures, for example Ava from 2014’s Ex Machina or Westworld’s robotic hosts; or simply by having robots show even the slightest recognizable human features, much like WALL-E and EVE.
Not only is this embodiment of AI an easier way to represent these characters in visual media, allowing human actors to take on the roles, but it also helps viewers identify with them.
Another common AI narrative is the visualization of either a utopian or a dystopian future, where machines have overpowered humans or offer their only chance of survival. Lastly, there is a lack of representation of the different types of AI that exist in real life, with fiction focusing mostly on the types of AI with which humans can establish a connection. Think, again, of Ex Machina’s Ava or Scarlett Johansson’s voice in Her.
Despite movies and TV shows being fiction, they still shape how audiences perceive AI technology. Unless you work in the field, or are really, really interested in the topic, you’re much more likely to end up in the cinema on a rainy Saturday afternoon watching a sci-fi movie with a catchy name or an attractive leading actor than to read about the latest research on machine learning and facial recognition algorithms.
Yet these perceptions are very often disconnected from the real state of AI technology. The Royal Society’s report explains:
Exaggerated expectations and fears about AI, together with an over-emphasis on humanoid representations, can affect public confidence and perceptions. They may contribute to misinformed debate, with potentially significant consequences for AI research, funding, regulation and reception.
But again, this is fiction; its ultimate goal is to entertain the public, not to be a reliable source of information. However, if you actively read the technology, science or business sections of your news outlet of choice, you’re probably looking for accurate, unbiased information on the topic. And here is where it gets tricky, as the prevalent angles and themes found in fiction often make their way into non-fictional texts.
A study on the long-term trends in the public perception of AI, by Ethan Fast and Eric Horvitz, analyzed articles and mentions of AI in the New York Times over a 30-year period, concluding that AI-related topics have increased since 2009 and tend to have a more optimistic tone. Yet there are specific concerns around AI that have been increasing.
A couple of years ago, a team of researchers at Facebook’s Artificial Intelligence Research unit released an article on the bots they were developing that could simulate negotiation-like dialogues. The bots would, for the most part, exchange coherent sentences, but would occasionally generate nonsensical statements.
The Facebook researchers soon realized the bots were generating sentences outside the parameters of spoken English, which they had failed to constrain in the software; the result was a machine-English language the bots started using to communicate with each other. This was considered a fairly interesting finding in the AI research community, but not necessarily a groundbreaking one.
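The mechanism behind such drift is unglamorous: the bots were trained to maximize a negotiation objective, and nothing in that objective rewarded grammatical English. A toy sketch (not the actual Facebook setup; the vocabulary and reward weights are invented) shows how an unconstrained objective degenerates into repetitive, non-human output:

```python
# Toy illustration: an agent composes a 5-token message to maximize
# a task reward that scores "deal-relevant" tokens. Nothing rewards
# fluent English, so the greedy optimum is degenerate repetition --
# reminiscent of the bots' reported "i i can i i i" style of lines.

VOCAB = ["i", "can", "ball", "hat", "book", "the", "want", "two"]
REWARD = {"ball": 3, "hat": 2, "i": 1}  # hypothetical task weights

def compose(length=5):
    """Greedily pick the highest-reward token and repeat it."""
    best = max(VOCAB, key=lambda t: REWARD.get(t, 0))
    return " ".join([best] * length)

print(compose())  # prints "ball ball ball ball ball"
```

Reward-maximal, yes; English, no. The real fix the researchers applied was to constrain the bots’ outputs toward human language, since the goal was bots that negotiate with people.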
As reported by The Guardian, the story was picked up around a month later by Fast Company, who retold it under the title “AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?”. The Sun also published an article about this, claiming that “experts [had] called the incident exciting but also incredibly scary”. Other headlines, such as “Facebook engineers panic, pull plug on AI after bots develop their own language” and “Facebook’s Artificial Intelligence robots shut down after they start talking to each other in their own language”, focus on the false allegation that the whole research project was shut down because the bots had developed their own, non-human language. The bots were indeed shut down, but because the researchers were interested in bots that could negotiate with people, and the results were not the ones they expected.
The majority of reports on the Facebook experiment told the story from a fear-inducing angle, echoing fictional narratives about the looming evils of artificially intelligent machines that are out to get us and exterminate all of mankind. Yet experts agree it is implausible that the most imminent dangers of AI will come from a super-intelligence, or that machines will become more intelligent than humans anytime soon.
That’s why they believe in the need for a change in current AI coverage by the media. Zachary Lipton, an assistant professor at the Machine Learning department at Carnegie Mellon University, spoke to The Guardian apropos the Facebook bots, saying there is a tendency to transform research into “sensationalized crap.” He is playing his part by maintaining a blog in which he tries to “counter-balance and deconstruct some of the most damaging sensationalist AI news.”
This disconnect between the reality of AI technology and the way it’s portrayed by the media, in both fictional and non-fictional narratives, is addressed in the Royal Society’s AI narratives report as well. Much like Lipton, the Royal Society has been trying to steer the public conversation about AI toward a more informed one. In 2016 and 2017, it put together the first UK public dialogue on machine learning, bringing together researchers and demographically diverse audiences in an effort not only to deliver accurate information, but also to better understand what questions people have and to make AI communication more engaging.
Reality is stranger than fiction
The consequences of a misinformed public, the Royal Society says, go beyond an instilled, unrealistic fear of an AI apocalypse. False dread about a dystopian future can redirect public conversation to non-issues like robot domination, steering attention away from actual issues like privacy concerns over facial recognition algorithms, or the perpetuation of gender bias and discrimination when machines learn from biased data. Not only that, but false fears in society can also lead to over-regulation that suffocates innovation in certain sectors, such as research into fairer algorithms, and to a lack of funding.
Lipton agrees that people are afraid of the wrong things, and it’s not only the general audience but politicians and decision makers as well.
There are policymakers earnestly having meetings to discuss the rights of robots when they should be talking about discrimination in algorithmic decision making. But this issue is terrestrial and sober, so not many people take an interest.
On the other end of the spectrum, hopes of a utopian future where machines will, for example, free humans from the burden of work can create false expectations of AI technology. Should it fail to deliver, the public’s confidence could be damaged, not only in AI but in its researchers as well, making them seem less credible.
In fictional narratives, such delicate and complex issues are harder to turn into a captivating story, so they are often left out.
But fictional narratives are not all bad. Beyond their immediate entertainment value, they can make us aware of topics we might not be inclined to think about otherwise. This doesn’t mean, however, that we should take what we see in movies, or watch or read in the news, at face value. We should all make a collective effort to seek out information from varied, credible sources and keep up with technological advancements in AI. If a machine uprising does happen, we might just have enough knowledge to make it out alive.