Bat-winged humanoids, blue-skinned goats, bipedal tailless beavers, a temple made of sapphire — these were some of the sights witnessed by Sir John Herschel, a well-known British astronomer, when he pointed his telescope skywards from an observatory in South Africa. Or, at least, that’s what the New York Sun reported, in a series of stories back in 1835.
The reports raised such a furor that the paper’s circulation more than doubled from 8,000 to over 19,000 copies. Suddenly, everyone who had read the reports believed there was a thriving colony of bat-men on the Moon.
Except for one minor detail: none of it was true. The stories had been fabricated by Richard Adams Locke, the Sun’s editor.
The incident went down in history as the Great Moon Hoax, but if all of this were to happen today, we would call it something else: fake news!
Donald Trump may think he invented the term that has surged in popularity over the last few years, but that’s not even close to being true. Fake news has been around for many years, but we’re worried about it now more than ever. How’d we go from the mostly harmless Great Moon Hoax to the very serious threat posed by the Pizzagate scandal?
Well, the speed and scale of the internet made it a hundred times worse. As the New York Times explains in a three-part film series released this November, disinformation spreads like a virus.
So how do you stop this virus from spreading?
If you ask Mark Zuckerberg, Facebook’s CEO, he’ll probably repeat what he told the US Congress earlier this year: Artificial Intelligence will fix it.
But is it true that AI can fix fake news? Will it be possible for a machine to verify automatically whether something is true or false? And more importantly, can AI fix a problem that it helped create in the first place?
AI, we have a problem
Much has been written about artificial intelligence and disinformation, but we can’t say enough about AI’s role in the production and distribution of fake news around the world.
Ariel Conn, Director of Media and Outreach at the Future of Life Institute, elaborates: “When AI researchers create programs that can modify videos so that it looks like someone said something they didn’t say, it makes us think about the ethical implications of this technology.”
Take this video of former US President, Barack Obama, for instance. It was produced as an experiment by researchers from the University of Southern California for an episode of RadioLab called “Breaking News.” If you saw this on your computer, or on your phone, would you believe it?
The RadioLab episode aired in July 2017. Fast forward to April 2018, nine months later, and you get an improved version, this time published on Buzzfeed as a public service announcement. Turns out: Obama didn’t call Trump a “complete dipshit.” The Oscar-winning filmmaker Jordan Peele did — in his most convincing Obama voice. The fake audio had been convincingly layered over genuine video footage of the former president.
The machine learning technology used to create these videos is known as deepfakes, and it allows anyone to create a highly realistic simulation of pretty much any human being (as long as you have enough video and audio recordings of them).
But the real problem with deepfakes is when people stop using this technology to add Nicolas Cage to random movies and start producing videos of political leaders saying things they never did, as we’ve seen above. As Ariel Conn puts it: “It’s going to get much easier to manipulate people, and that’s extremely concerning.”
So where does AI stand in all of this? Is it the cure or the disease?
Fixing broken news
To know whether Artificial Intelligence can help us fix fake news, we first need to understand how machine learning works.
Machine learning is the science of getting computers to learn from data, identify patterns, and make decisions with minimal human intervention. The idea is for machines to improve their performance over time and become increasingly autonomous.
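To make that definition concrete, here is a deliberately tiny sketch of the learn-from-labeled-data loop. Everything in it — the headlines, the labels, and the word-counting “model” — is invented for illustration; real systems train on vastly more data with far more sophisticated models.

```python
from collections import Counter

# Toy training data: (headline, label) pairs.
# These examples are made up for illustration only.
training_data = [
    ("scientists confirm study results after peer review", "real"),
    ("new research published in medical journal", "real"),
    ("you won't believe this one weird miracle cure", "fake"),
    ("shocking secret they don't want you to know", "fake"),
]

def train(data):
    """Learn from data: count how often each word appears under each label."""
    counts = {"real": Counter(), "fake": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    """Identify a pattern: score a new headline by summing per-label word counts."""
    scores = {
        label: sum(counter[word] for word in text.split())
        for label, counter in model.items()
    }
    return max(scores, key=scores.get)

model = train(training_data)
print(classify(model, "shocking miracle cure secret"))  # -> fake
```

Feeding the model more labeled examples improves its word counts, which is the (very simplified) sense in which such a system “improves its performance over time.”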
So, in order to apply this to our fight against disinformation, we would need to develop technology that analyzes data to correctly determine whether a piece of content is true or false. However, according to the experts, including Facebook’s European Director of AI Research, Antoine Bordes, that’s easier said than done:
“If it’s about recognizing something odd in an image, that’s something a machine can do, but if it’s about interpreting whether a text is true or false, that’s much, much, more deep. It’s much more complicated to flag and that’s not yet something a machine can do.”
Why? Because machines lack basic human skills like “understanding, common sense, and being able to put things in context.”
Nonetheless, this doesn’t mean researchers aren’t working tirelessly to solve this issue. The number of studies on this subject has increased significantly over the past couple of years, and some have actually proven to be quite fruitful.
The best AI for spotting fake news
In the MIT Technology Review, Preslav Nakov, a senior scientist at Qatar Computing Research Institute and one of the researchers on a new study about media outlets’ trustworthiness, writes that, in spite of all the skepticism, he’s optimistic about using machines to spot fake news. All the same, he predicts that it probably won’t happen anytime soon.
In the study Nakov helped conduct, the researchers trained their system on variables that machines could analyze without human intervention. They performed content analysis of headline structure and word variety; evaluated web traffic; and assessed the influence of media organizations by measuring engagement on social media.
At the end of the experiment, their best model accurately labeled media outlets with “low,” “medium,” or “high” trustworthiness only 65% of the time.
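The general shape of that pipeline — extract machine-computable signals per outlet, then map them to a trustworthiness label — can be sketched as below. The feature names, thresholds, and decision rule are all illustrative stand-ins, not the study’s actual model (which was a trained classifier, not hand-written rules).

```python
def extract_features(headlines, social_shares):
    """Compute the kinds of signals the study describes:
    headline structure, word variety, and social-media engagement."""
    words = [w for h in headlines for w in h.split()]
    return {
        # Headline structure: average length in words.
        "avg_headline_len": sum(len(h.split()) for h in headlines) / len(headlines),
        # Word variety: ratio of unique words to total words.
        "word_variety": len(set(words)) / len(words),
        # Engagement: average shares per story.
        "avg_shares": sum(social_shares) / len(social_shares),
    }

def label_trustworthiness(features):
    """Toy decision rule mapping features to low/medium/high.
    The thresholds here are arbitrary, chosen only for illustration."""
    score = 0
    if features["word_variety"] > 0.5:
        score += 1
    if features["avg_shares"] > 1000:
        score += 1
    return ["low", "medium", "high"][score]

outlet = extract_features(
    ["economy grows amid trade talks", "council approves new budget plan"],
    social_shares=[1500, 800],
)
print(label_trustworthiness(outlet))  # -> high
```

Even with far richer features than this sketch, the study’s best model was right only about two times in three — which is why Nakov tempers his optimism.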
But analyzing the trustworthiness of media outlets and identifying fake news is a delicate dance. Journalists follow—or are supposed to, at least—meticulous methodologies and codes of conduct in order to produce the news we read every day, and that’s a discipline no computer can yet grasp.
Humans and machines of the world, unite!
“I believe AI is going to be more and more useful in spotting fake news, but we also need to step up the human game. In the end, it will be a team effort between machines and humans,” says Antoine Bordes, European Director at the Facebook AI Research Lab.
Combining the best of human and artificial intelligence might be our best hope. Machines give us speed and scalability, while humans bring understanding to the table—and with it, the ability to consider context and nuance when evaluating the veracity of a text. This also allows us to feed more data into the system and improve its performance over time.
Nonetheless, it’s important to highlight the fact that we’re also talking about journalism, an incredibly human-centered activity that follows rigorous sets of rules to get to the truth. This is why fact-checking platforms are a big piece in this fake news puzzle.
We spoke to Aaron Sharockman, Executive Director at PolitiFact, one of the most recognized fact-checking websites in the US, about the role AI will play in the future and the methodology that powers their Truth-o-Meter.
They have 11 full-time journalists working every day, sifting through the most important stories in print or broadcast journalism for checkable facts.
“From there, the first thing we do is ask the speaker what’s your evidence that this is true? So whereas in the United States if you get arrested, you’re innocent until proven guilty. Here, you’re kind of guilty until proven innocent. If you say something, you should have the facts to support it, to back it up.”
They then rely on independent and expert sources who are willing to go on the record. “From there, a writer recommends a verdict or a rating. We have six ratings we use, between true all the way to pants-on-fire false, which is our biggest lies. But ultimately the writer doesn’t get to decide the rating. What happens is it goes to a panel of three editors, and they sit as the jury.”
Could AI ever replace that?
Well, a machine couldn’t possibly get its algorithms around it, but it could help make it more efficient and send that information out to a greater number of people, according to Aaron Sharockman:
“People will have to be always involved in fact checking, helping people understand what’s true or not. It’s ultimately a very person-centered system. However, that said, I think computers can help make the process a lot more efficient. So what I’m looking forward to is how can computers take my fact checker, one of our 11 fact checkers, and have the ability to double the amount of fact checks they write?”
Sharockman explains further:
“How can computers cut down on the time it takes to write a fact check from six hours to three hours? And then secondly, how can computers and AI amplify the reach of our fact check? So that works in two ways. One is the simple way, which is we publish a fact check, how can we make sure everyone sees it? Two is that misinformation continues to spread no matter what we do, how can the fact check try to stay close to where the misinformation’s spread? Whether that means it repeats itself on another blog, how can the fact check be tied to that blog? Is it a Twitter Bot replying to someone who posted a bad link on Twitter? So I think those are the things I’m most excited about. And I can see them in my horizon; I can feel like they will come.”
Catching up with reality
In the end, it feels as if we’re still catching up with reality. As the line between simulation and reality continues to blur, we need to know exactly where we stand on this technology and whether it’s going to dictate the truth or detect it.
Ariel Conn, from the Future of Life Institute, isn’t so sure we have an answer yet. “I have a feeling that this is going to be much like all the other threats we face. With cyber security it’s almost entirely a case of us catching up. But once we have a better feel for what some of these threats are, we are able to create pretty good programs to protect us against cyber threats. So I think AI will be the same, where, especially initially, it will probably be catching up. But hopefully, it will get more proactive.”
Today, the fight over information is at the center of debate, and the damage is real. We might not be stoking fears of an alien invasion—I mean, I have a good idea of who would be the first to tweet about it—but we’re spreading lies that could have a serious impact on our lives, our democracy, and our future. Technology will only take us part way: we’ll need to check our own bad habits.