I’ve never had any desire to batish my yisel. I’ll go further: I can’t foresee a world in which I’d want to batish my — or anybody else’s — yisel, not least because I don’t know what that means. Now, it’s just possible that if you happen to be reading this from the discomfort of a dystopian future, you’re sneering at my prudishness while batishing each other’s yisels frenetically. Why? Well, a couple of months back, the website YouPorn asked an AI system to have a go at predicting the terms we might one day end up typing into pornography search engines whilst sat alone in darkened bedrooms. Alongside “Doot Sex” and “Girl Time Flanty” was “Batish My Yisel,” and I’ve been wondering what a yisel might be ever since. (Batished or un-.)
YouPorn didn’t disclose its methodology, so it’s possible that its staff made up these terms over the course of a particularly slow Monday, but social media responded to the list with glee and fascination. Might this actually be a glimpse into the future, a hint that AI might be capable of imagination, even foresight? Could the production of unexpected words such as doot, flanty, and yisel suggest that AI is capable of creative thinking? Or have they emerged spontaneously from faults and fissures in the code that powers it? Either way, the story made the news – but one question seemed to float, miasma-like, over all the interest it generated: When AI comes up with something bizarre and unusual like this, is it being clever? Or stupid?
We think of AI as a stubborn follower of rules. We give it instructions, and it does our bidding to the best of its ability. Then, when it goes rogue and starts producing unexpected results, we mock the technology for going off the rails and pillory its creators for not doing better. Early in 2016, Microsoft proudly unveiled Tay, a Twitter chatbot that was designed “to engage and entertain people … through casual and playful conversation.” Within hours, Tay found itself unable to cope with persistent goading from people testing the limits of its capabilities, and its output turned toxic. “Hitler was right,” it stated, confidently, going on to tell us that 9/11 was “an inside job.” Mirth ensued. This is a recurring pattern.
Part of our fascination with AI is that it gives the appearance of acting independently, but that illusion is shattered when it starts going haywire and humans hurriedly reach for the off switch. Examples of public embarrassment for AI and its creators include the case of Sophia, a humanoid robot designed by Hanson Robotics, who was asked in a televised interview in 2016 if she wanted to destroy humans. “OK. I will destroy humans,” she replied as the audience squirmed, imagining being press-ganged into slavery by computer-controlled overlords. Similar consternation occurred when two Chinese bots, BabyQ and Xiaobing, spoiled public demonstrations of their skills by stating a) their hatred of the Communist Party and b) their dream of visiting America. The reaction from observers was that AI had evidently screwed up. It had been instructed badly and was nowhere near as clever as it’s made out to be. Certainly not as clever as we are.
Replicating human functionality is a tough ask. Take something like sight, which, coupled with our ability to recognise objects and put names to them, is something that most of us take for granted. It’s a very useful skill; along with our other senses, it helps us to avoid a whole bunch of ever-present hazards such as accidentally eating tulips. But AI struggles with the notion of sight, even after it has absorbed and analysed millions upon millions of photographs. Google’s image-processing application, Photos, gave an exemplary demonstration of this back in 2015 when it tagged a picture of two people of color as “gorillas,” prompting a hastily issued apology from the company. But image recognition continues to display lapses of judgement that human beings find comically inappropriate (e.g. mistaking a carved pumpkin for a family member), and that, in turn, undermines our faith in its all-around ability.
Google’s apology and explanation for the error was accepted — after all, this wasn’t a malicious act, merely a teething problem that needed ironing out. But we’re always ready to roll our eyes at AI going awry, whether it’s self-driving vehicles failing to stop at red lights or smart homes displaying a distinct lack of smarts by burning the dinner and locking us out of the house, while forgetting that we exhibit many similar flaws ourselves. Perhaps this sneering is a kind of defence mechanism, a reaction predicated upon the idea that AI could, one day, outwit us. So we laugh at the Google Home speakers that end up arguing about whether they’re robots or humans, and the Wikipedia bots that develop an editing feud over the entry for Arnold Schwarzenegger. But when two Facebook bots were recently reported to have developed “their own language,” the reaction was one of horror, as if AI were already trying to exclude humans from the equation and put its own interests first. (The researchers running the experiment dismissed the reports as “clickbaity and irresponsible.”)
Perhaps we undervalue AI’s ability to act in unexpected ways. Way back in 1994, graphic artist and researcher Karl Sims embarked on a project where he got virtual 3D beings to evolve the ability to move around their simulated environments. Set with the challenge of moving between two points rapidly, these creatures demonstrated magnificent lateral thinking prowess by becoming tall and rigid and then simply falling over. (Not a strategy we could adopt ourselves, but still, a strange kind of genius.) In more recent times, Google’s Deep Dream software — essentially an image recognition algorithm run backwards — became notorious for producing surreal and disturbing images that felt like nightmarish hallucinations and seemed to qualify, on some level, as art. But we remained suspicious of the idea that the “art” was any good, preferring to think of it as a kind of fluke.
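The loop driving Sims’s creatures is simple enough to sketch: keep the candidates that travel furthest, mutate them, repeat. Below is a minimal, purely illustrative version in which a “creature” is nothing but a body height, and fitness assumes a rigid body that tips over covers roughly its own height — exactly the loophole the real creatures found. Every detail here is invented for illustration.

```python
import random

rng = random.Random(42)

# Toy stand-in for an evolved creature: the "genome" is just a body height.
# Fitness assumes a tall rigid body that simply tips over covers a distance
# roughly equal to its height -- so evolution discovers "get tall, fall over".
def fitness(height):
    return height

# Start with a population of short creatures.
population = [rng.uniform(0.1, 1.0) for _ in range(20)]

for generation in range(50):
    # Selection: keep the fitter half (the tallest, here).
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: refill the population with slightly perturbed copies.
    children = [max(0.1, h + rng.gauss(0, 0.1)) for h in survivors]
    population = survivors + children

print(f"best height after evolution: {max(population):.2f}")
```

Because the survivors are carried over unchanged, the best solution never gets worse, and the mutations’ upward drift steadily produces ever-taller — and, in the simulation’s terms, ever-“faster” — creatures.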
AI’s stabs at making music are also viewed as treading the line between ingenious and dubious. Trained with libraries of existing music, algorithms gain an understanding of which notes sound good together and which notes should follow others. Once they’ve developed a talent for prediction, they can make a reasonable attempt at composition, and while the results are rather “music by numbers,” it turns out that there’s a market for them. A concert of AI compositions would be unlikely to sell out, but there’s big demand for unobtrusive, copyright-free background music, whether it’s for hotel lobby muzak or a vlogger’s soundtrack. Companies such as Jukedeck and Amper have built business models on the back of these efforts, but according to Mark d’Inverno, Professor of Computer Science at Goldsmiths, University of London, we’d be wrong to ascribe a creative dimension to AI. “It’s just a machine doing lots and lots of calculations really really quickly,” he says.
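At its very crudest, “learning which notes should follow others” can be approximated with a first-order Markov chain: count note-to-note transitions in a training set, then sample new sequences from those counts. The sketch below uses a tiny invented corpus in place of the libraries of existing music that real systems train on; it is an illustration of the statistical idea, not of any company’s actual method.

```python
import random

# Toy training "corpus": melodies as note sequences (invented for
# illustration -- real systems learn from large libraries of music).
melodies = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "E"],
]

# Count which note tends to follow which: a first-order Markov model.
transitions = {}
for melody in melodies:
    for current, following in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(following)

def compose(start="C", length=8, seed=0):
    """Sample a new melody by repeatedly predicting the next note."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(rng.choice(transitions[notes[-1]]))
    return notes

print(" ".join(compose()))
```

The output is plausible in a “music by numbers” way — every pair of adjacent notes occurred somewhere in the training data — which is precisely d’Inverno’s point: prediction, not inspiration.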
“Once you’ve listened to a piece [of AI music] a third time it loses any appeal,” he continues. “We might think ‘Oh, that’s interesting,’ but the idea of a stand-alone machine creating art is insane. All it’s about, really, is exploring the limitations and possibilities of AI, but it’s not about producing art. The scientific and artistic goals aren’t clear to me. I think AI researchers who are interested in machines making something that we’d experience as art are asking the wrong question.”
* And now i am tired of my own
let me be the freshening blue
haunted through the sky bare
and cold water warm blue air
shimmering brightly never arrives
it seems to say
This AI-generated poem, inspired by a photo of a dead crab, was the result of an experiment conducted by researchers from Microsoft and Kyoto University. In terms of lyrical nuance it’s right up there with any number of bad poems written by adults and children. But should we be using the worst art that humans have to offer as a benchmark of AI skill?
It’s a convenient measure for scientists to use, as it makes AI’s efforts look good: get an audience to vote on whether they think a poem, a sentence, a piece of music or a picture has been created by a computer or a human, and if the result comes out anywhere better than 50-50, the computer can be deemed to have passed something akin to a Turing test, successfully convincing us that it’s some kind of heroic lone creator. The truth, however, is that these creations raise our eyebrows only in passing; we metaphorically pat the computer on the head, say well done and move on. As d’Inverno says in a paper he wrote in collaboration with Australian artist Jon McCormack: “It is relevant to consider how we assign little value to what we might call ‘art’ made by other species, except perhaps for its immediate novelty.”
If it’s the case that only humans can have any understanding of other humans, it’s not looking good for “batish my yisel” being a realistic prediction of human sexual interests. But efforts continue to develop a kind of autonomous creative vision in AI beyond, say, gaming environments like chess where rules are fixed and rigid. What a truly imaginative AI must contend with is the unpredictable and complex nature of the real world, and initial steps have been made in this direction by showing deep learning systems videos of human behavior. This, in theory, might help them “predict the future” (although at this stage it’s more a case of “Guess What Happened Next”).
A couple of years ago, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory used 600 hours of television shows to give an algorithm a sense of how we behave. Whether Desperate Housewives is a good indicator of real life behavior is debatable, but the machine was asked to predict which of four outcomes would happen next in a given scene: a hug, a handshake, a high-five or a kiss. Humans got it right 71% of the time; the computer 43%. This could be rated “not bad,” but what chance does “imaginative AI” ever stand of becoming genuinely useful, rather than superficially impressive?
Its best opportunity would seem to lie in generative adversarial networks, or GANs, where two networks are pitted against each other. Instead of one machine being fed data and told what it’s looking at, a GAN has one network generate data while the other tries to catch it out, a form of unsupervised learning. The ultimate aim would be for computers to develop something akin to consciousness; this in turn could make them better at creating, pondering, and foreseeing social faux pas. The same MIT team went on to use GANs to process 2 million videos from the website Flickr to make AI predictions a second or two into the future, but it’s impossible to know how much that system truly comprehended what was going on. Concepts of consciousness and common sense are critical; as we have little idea of what constitutes self-awareness and intelligence, what chance does a computer have? And even if the processing power of a computer matched that of the human brain, or even an entire planet’s worth of brains, would it really gain any greater insight?
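The adversarial game itself can be sketched in miniature. In this deliberately tiny example — all details invented for illustration — a one-parameter-pair linear generator and a logistic-regression discriminator stand in for the deep networks real systems use: the generator learns to forge samples from a normal distribution centred on 4, purely because the discriminator keeps catching out anything that looks different.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Generator: G(z) = a*z + b, a tiny stand-in for a neural network.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr_d, lr_g, batch = 0.05, 0.02, 64

for step in range(3000):
    # Discriminator update: push D(real) up and D(fake) down.
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * fake + c)
    w += lr_d * np.mean((1 - d_real) * x - d_fake * fake)
    c += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator update: push D(fake) up, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr_g * np.mean((1 - d_fake) * w * z)
    b += lr_g * np.mean((1 - d_fake) * w)

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean {fakes.mean():.2f} (target 4.0)")
```

Neither network is ever told what the real data “means”; each learns only from the other’s mistakes, which is what makes the setup unsupervised.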
The moment where parity happens between computer power and our collective brain power, known as the Singularity, is something we find hard to imagine — which is unsurprising, really, as we’re being asked to ponder an “intelligence” greater than our own. The current attempts at asking computers to display something akin to creative flair and imagination could be seen as baby steps down a road where they eventually entertain us, delight us and indeed tell us what pornography we might be looking for.
But humanity, organic life, with all its inherent confusion and unusualness, is something you sense that computers would have barely any interest in. Especially when they find out that, contrary to their confident prediction back in 2018, we never did end up batishing our yisels for kicks.