The past, the present and the future of AI

18 April 2019

The other day I came across an article in MIT Technology Review in which, after analyzing 16,625 papers on the future of Artificial Intelligence, the authors predicted the end of the deep learning era. I think a lot about AI on a daily basis — part of the job description — but after reading that, I really did stop in my tracks to reflect on the subject. When the deep learning era comes to an end, as I have no doubt it will, what will follow?

Over the last few years, we have seen huge technological advances in AI, particularly in natural language processing, computer vision, and robotics. This is due mostly to the success of machine learning, the technology that allows machines to learn from data and improve their performance with experience.

But are these advances bringing us any closer to reproducing human intelligence? What will the coming years bring us? And what challenges will we face next?

From Dartmouth to HAL 9000

The HAL 9000 computer from Arthur C. Clarke and Stanley Kubrick’s “2001: A Space Odyssey” is the archetype of an artificial intelligence. It is endowed with human-like skills, such as understanding language, drawing up strategies to achieve a goal, gathering data from its surroundings, and making decisions based on those data. HAL 9000 was conceived in the 1960s, amid the optimism that followed the Dartmouth Summer Research Project on Artificial Intelligence. The 1956 conference brought together Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky, and Ray Solomonoff, among many others, marking the beginning of AI as a field of scientific research.

Despite the initial optimism, AI has been on a turbulent ride. It was commonly held that a few years would be enough to develop technologies capable of recognizing people, understanding human speech, and translating between any pair of languages. When these expectations went unmet, the field entered a period known as the AI winter, during which research funding suffered large cuts.

Decades later, the AI winter was over and optimism bloomed again.

AI gold rush

Today, we use AI algorithms on a daily basis: whenever we search the web, use an online translator, or get a book recommendation from the website where we usually buy books. Stock transactions are made by algorithms in a matter of milliseconds. Pattern recognition algorithms are becoming more and more popular in medical imaging analysis. Big companies like Google, Facebook, Microsoft, Amazon, and Uber are developing autonomous vehicles, digital personal assistants, dialogue systems, and automatic translators, storing huge amounts of data and resorting to machine learning techniques. We are witnessing a true “gold rush,” with the US, China, Canada, and European countries such as France making big strategic investments in AI to accelerate progress.

One of the most desirable characteristics in AI is the ability to make complex decisions. This was precisely the object of study of the early work of Herbert Simon (winner of the 1978 Nobel Prize in Economics), to whom we owe the principle of bounded rationality: a decision process must take into account the limits of the available information, the cognitive limits on processing that information, and the time available to decide. We are currently seeing successes in this area in controlled environments, such as games with strictly defined rules. The AlphaGo system, using reinforcement learning techniques, beat the best human players at Go, a historic milestone we thought was still decades away.
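
To give a flavor of how reinforcement learning operates in such rule-bound settings, here is a minimal tabular Q-learning sketch on an invented toy game. It is a far simpler ancestor of the methods behind AlphaGo, and every detail of the environment is assumed for illustration:

```python
import random

# Toy "game" with strict rules: states 0..4 on a line; action 0 moves left,
# action 1 moves right; reaching state 4 wins (reward 1). The learning loop
# has the same shape as far larger systems, just on a tiny table.

N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the game's rules: move left/right, reward 1 on reaching the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known move, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy moves right from every non-goal state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The same loop (act, observe a reward, update a value estimate) underlies systems like AlphaGo, with deep neural networks and self-play in place of this tiny table.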

A bigger challenge is to leave these simulated environments and build machines capable of making decisions “in the wild,” based on observations of the real world. When this becomes possible, beyond the industrial robotics we already know, we will have a broad set of professions whose tasks can be assigned to machines: doctors, engineers, judges, and financial analysts. We can expect this to happen over the coming decades.

All-knowing personal assistants

One of the oldest utopian ideas related to AI is automatic translation: a machine’s ability to translate between any pair of languages, breaking down all language barriers and mediating communication between humans. This area has evolved notably in recent years, thanks to deep learning techniques based on neural networks. While it’s still not possible to automatically translate a book with the same level of competence as a human translator, a lot of machine-translated content, like news or emails, shows a quality far superior to what it was five years ago.

In the near future, we can expect advances in natural language processing (including speech recognition and synthesis, semantic information extraction, and dialogue systems) to be integrated into personal assistants: gadgets capable of communicating with us, managing our daily schedules, and looking up information online. These gadgets will know everything there is to know about our tastes and preferences, and they will soon become indispensable.

Forms of intelligence

To make predictions about a more distant future, we need a broader, less anthropomorphic view of “intelligence.” Is biological inspiration a necessary condition for creating an AI? We tend to view the future of AI in light of what we know about human intelligence, but is that the only possible form of “intelligence”?

Let’s look at aerodynamics: even though the flight of birds inspired the first flying devices, airplanes don’t flap their wings. By the same token, it might be possible to build “intelligent” machines without trying to replicate the mechanisms of the brain. Kinds of “intelligent behavior” might emerge in systems with multiple agents: faced with the need to cooperate to solve a problem, these agents spontaneously develop communication protocols to exchange information amongst themselves. What language do these machines speak? What does this artificial language have in common with human language? Which will prove more conducive to intelligent behavior: a symbolic language like ours, or “continuous representations” unintelligible to the human ear? Is it possible to mediate between these internal representations and human language, with the goal of achieving interpretability?
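
To make the emergence of a protocol concrete, here is a toy sketch of a Lewis-style signaling game; every detail (the number of meanings, the reinforcement rule) is invented for illustration. Two agents are rewarded only when they cooperate successfully, and an arbitrary but shared mapping from meanings to symbols emerges without anyone designing it:

```python
import random

# Toy Lewis signaling game. A sender observes one of three "meanings" and
# emits one of three arbitrary symbols; a receiver sees only the symbol and
# guesses the meaning. Success rewards both, so a shared "protocol" emerges.

MEANINGS = [0, 1, 2]
SYMBOLS = ["a", "b", "c"]

# Propensity tables: higher weight = more likely to be chosen.
sender = {(m, s): 1.0 for m in MEANINGS for s in SYMBOLS}
receiver = {(s, m): 1.0 for s in SYMBOLS for m in MEANINGS}

def weighted_choice(table, context, options):
    """Sample an option proportionally to its learned propensity."""
    weights = [table[(context, o)] for o in options]
    return random.choices(options, weights=weights)[0]

for _ in range(20000):
    meaning = random.choice(MEANINGS)
    symbol = weighted_choice(sender, meaning, SYMBOLS)
    guess = weighted_choice(receiver, symbol, MEANINGS)
    if guess == meaning:
        # Cooperative success: reinforce the choices that led to it.
        sender[(meaning, symbol)] += 1.0
        receiver[(symbol, guess)] += 1.0

# Print the emergent "dictionary": the symbol each meaning settled on.
for m in MEANINGS:
    print(m, "->", max(SYMBOLS, key=lambda s: sender[(m, s)]))
```

Which symbol ends up attached to which meaning differs from run to run: the protocol is arbitrary, like human words, yet perfectly functional for the two agents that share it.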

Unfortunately, we still don’t have a handbook that can guide us through AI’s possibilities and limitations while also providing us with the tools for disruptive advances.

In other words, AI has just left its prehistoric phase and is now in its “Ancient Age”: a period marked by extraordinary collective enterprises, such as the Great Pyramids of Giza (circa 2500 B.C.), but also by relatively rudimentary techniques. Historians estimate that the Great Pyramids were (frantically) built by around 10,000 workers in 3-month shifts over the span of 30 years. The number of scientists and engineers working in AI today certainly surpasses that. The computational effort, measured in teraflops and in the energy dissipated by gigantic data centers, surely rivals the human energy expended in piling up the stone blocks of the pyramids. And yet the techniques we use today in AI seem just as rudimentary.

Man versus machine

AI is increasingly impacting our daily lives, and its benefits are undeniable. Nevertheless, there are still key skills we need to master to overcome the limitations of current AI systems. Unsupervised learning is one of them, as it is the only way a system can learn without human direction. Beyond that, it’s very hard to make predictions in a field where yesterday’s technology is still on its way to reaching its full potential. There’s a good chance something disruptive will happen that points us in a completely new direction.
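
As a small illustration of what learning without human direction means, here is a minimal k-means sketch: it discovers groups in unlabeled data with no human-provided answers. The data and parameters are invented for the example, and the unsupervised learning we still need to master goes far beyond this classic algorithm:

```python
import random

# Minimal k-means sketch: group unlabeled 1-D points into k clusters.
# No labels and no human direction; structure is discovered from data alone.

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.4]   # toy unlabeled points
k = 3
centroids = random.sample(data, k)

for _ in range(100):
    # Assignment step: attach each point to its nearest centroid.
    clusters = [[] for _ in range(k)]
    for x in data:
        nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # Update step: move each centroid to the mean of its cluster.
    centroids = [sum(c) / len(c) if c else centroids[i]
                 for i, c in enumerate(clusters)]

# Typically converges to roughly [1.0, 5.07, 9.1], depending on the random start.
print(sorted(round(c, 2) for c in centroids))
```

No one tells the algorithm what the groups are, or even that they exist; that is the sense in which the system learns on its own.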

That’s why I don’t believe machines will become “more intelligent than us” anytime soon, or that we’re even remotely close to communication as fluid as HAL 9000’s. Despite the alarmism of Stephen Hawking and Elon Musk, who see in AI “the most serious threat to the survival of the human species,” it doesn’t seem plausible that the most imminent dangers of AI will come from a super-intelligence. On the contrary, they will come from our lack of preparedness and from the misuse we will make of these technologies if we overestimate their capabilities and fail to understand their flaws and biases.

About the Author

André Martins

André Martins is the VP of AI Research at Unbabel, an Associate Professor at IST, and a researcher at IT. He received a dual-degree PhD (2012) in Language Technologies from Carnegie Mellon University and IST. His PhD thesis received an Honorable Mention in CMU's SCS Dissertation Award and the Portuguese IBM Scientific Prize. His research interests include natural language processing (NLP), machine learning, structured prediction, and sparse modeling, in particular the use of sparse attention mechanisms to induce interpretability in deep learning systems. He co-founded and co-organizes the Lisbon Machine Learning School (LxMLS, 2011-2019). He received a best paper award at ACL 2009 and a best system demonstration paper award at ACL 2019. He recently won an ERC Starting Grant for his DeepSPIN project (2018-2023), whose goal is to develop new deep learning models and algorithms for structured prediction in NLP applications.
