Brought to you by Unbabel

Azeem Azhar of The Exponential View talks to The Understanding

It was an absolute pleasure sitting down with award-winning entrepreneur, analyst and journalist, Azeem Azhar.

Having successfully founded (2010) and sold (2014) PeerIndex, which applied machine learning to large-scale social media graphs to make predictions about web users, Azeem's career has also seen him hold corporate roles at Reuters and the BBC, as well as serve as technology editor and columnist for The Economist, the Guardian and the Financial Times. 

Nowadays, most people will know him from The Exponential View, a weekly newsletter on technology and society that has become a must-open for tens of thousands of subscribers. Here's what others have to say about it: 

While populist politicians lower the standards of public discourse and cynically ignore science and shun data-driven debate, Azeem reminds us each week in Exponential View what the curious mind can learn.

—  Tom Glocer, Former CEO of Thomson Reuters

Still blown away by Azeem's newsletter. Excellent and useful aggregation of AI and 'exponential tech' news.

—  Mike Butcher, Editor-at-large at TechCrunch

The juiciest source I know for what’s new in the world of artificial intelligence, biotech, and the near future.

—  Kevin Kelly, Senior Executive Editor at Wired Magazine


After braving the London cold and trading tips on which brand of thermal underwear was the best (Uniqlo's HeatTech), we spoke at length about why machine learning is different than what's come before, how to compete with the Big 4 technology firms, the ethics of their increasingly powerful platforms, and what to keep an eye out for in 2018 and beyond.

I'd like to thank Ji-Hye Park of Impossible for letting us chat on camera in their London headquarters, my very first employer Spectrecom Films for the production, and Rui Gomes for editing.


Full transcript below.

Matthew — So you're increasingly known for your email newsletter, The Exponential View, but going back a bit before that, like, what's your background? Like, how did you, where did you come from?

Azeem — Yeah, you know, it's something of a medley. I started my career as a journalist. I covered the tech business for The Guardian and The Economist in the run-up to the internet boom, '97 through to '99. And in fact, I covered the Netscape IPO, if you can remember what Netscape was.

And then I moved over, and I've spent the bulk of my career in the startup and venture ecosystem, working for a number of small tech companies. Some of them have done quite well and some haven't done so well. And building my own startups, the most recent of which looked at how people behaved on Twitter and applied a bunch of machine learning. It was acquired a couple of years ago, and then I moved and worked with the acquirer and did a couple of other things, but having spent seven years building a company and focused very narrowly, I decided I wanted to go a little bit broader, and that's when I started to write Exponential View.

Matt — Okay, and that goes out on a weekly basis. Tell us a bit about what it is and how much effort or how much time does it take you?

Azeem — Well, Exponential View is really a labor of love. It's a weekly newsletter that covers technology and the range of technologies that I think are going to really make a difference over the coming few years. Things like artificial intelligence and blockchain, gene technologies, clean tech.

The lens that I take comes out of a sort of C. P. Snow's Two Cultures, which is: for people who are technologists, I want them to be able to understand the political, economic, philosophical and ethical dimensions of what these changes mean, and for people who come from the arts, I want them to understand those changes in the context of the technologies, and what's real and what isn't real.

Now, I have to say, it is a labor of love, but it didn't start that way. It started because I just wanted to externalize my thinking, but as it's gone on it's got some fans. It now drives itself and has forced me to think about kind of the audience a little bit more, and what is the set of questions I want them to think about.

Matt —  So you mentioned elsewhere that we live in the age of technology, what does that mean in a material sense, and why is it different than what's come before?

Azeem — Well, I think if we look at the world's largest companies today, the biggest seven or eight are all tech companies, the so-called GAFAs and the Chinese internet giants like Tencent. That wasn't the case ten years ago. I mean, ten years ago the world's largest companies were a smorgasbord of General Electric, Exxon and so on, and if you looked at those companies, they all made their money ultimately through the extraction of oil from the ground, or putting it into cars, or financing the purchase of those cars. It was very much what we understood as the kind of modern industrial economy. And if you look at the largest companies today, they're all technology businesses, and actually the bulk of them have at their core that relationship between consumers, data and machine learning. So that's it from an industrial standpoint.

But you can also see that we live in the age of technology in the way in which we live our lives. We spend more of our time connected through systems that are mediated by technology; we spend more of our time using technology to extend our brains. We live our social lives increasingly, or large parts of them, through technology. And again, if you go back ten or 15 years, that isn't how we used to do things. We would make an appointment to meet a friend at eight PM on a Tuesday night at the Crown and Goose pub, and we would be there about ten minutes late.

Matt — Or printed out Google Maps directions.

Azeem — Well, ten years ago probably an A to Z even, sort of ragged and stuck in the back of our bags. And those are the things that have really changed, and so when we look at what's going to matter now, it is going to be a set of things that are driven by these technologies.

Matt — So what's different about artificial intelligence, or machine learning than the kinds of technologies that we were used to before?

Azeem — I'm really glad you said the words machine learning there, when you talked about AI, because within machine learning, what we have is software or machines that learn from their experience, and they get better. That is a distinct class of machine. Our previous machines, whether it was the plow, or the water wheel, or the steam train, didn't do that.

It also gives rise to a really particular industrial strategy, and that strategy is what I've called the lock-in loop. Essentially, imagine the situation: you have a product, and that product uses machine learning, so when you put that product out to market, as users use it, through their use they feed the system data, and the system improves from that data and gets better, which will then make the product better, which will mean more users will use it, giving you more data. That becomes a virtuous circle, a lock-in loop, through which you will then have a better product than people who've built that product without machine learning. And if you've got a great business model at the same time, you're spitting out profits on the other side, which allows you to invest in new products, new ways of defending your position in the market.

And so you get to this world where once you put machine learning into an industry, and you effectively build products around it, it becomes very hard to compete unless you use machine learning. We saw that happening with video games, in first-person shooters; it's obviously happened in web search; it's happened in mobile. I mean, if you think about Google's web search franchise, which is still growing, it is the ultimate lock-in loop, so much so that that original insight, where they did those graph calculations to figure out reputation many years ago, drove usage, it drove revenues, it drove investment. Then Google was able to go out to publishers and say, look, your pages need to have this kind of structure in order to be indexed well by our website, and then off the back of that comes the advertising business, and of course now this enormous advertising company.
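The loop Azeem sketches – usage feeds data, data feeds quality, quality feeds usage – can be written down as a toy difference equation. This is purely illustrative: the market size, learning rate and switching rate below are invented numbers, not a model of any real product:

```python
# Toy model of the "lock-in loop": users generate data, data compounds
# into product quality, and quality pulls in more users.
def simulate(rounds=50):
    users_a, users_b = 500.0, 500.0     # two products split the market
    quality_a = quality_b = 1.0
    market = 1000.0
    for _ in range(rounds):
        # Only product A learns from usage: its quality compounds with the
        # data its users generate (square root = diminishing returns).
        quality_a += 0.01 * users_a ** 0.5
        # Each round, 5% of the market re-chooses based on relative quality.
        share_a = quality_a / (quality_a + quality_b)
        users_a = 0.95 * users_a + 0.05 * market * share_a
        users_b = market - users_a
    return users_a, users_b

users_a, users_b = simulate()
# Product A, the only one learning from its users, ends with the
# majority of the market.
```

Because quality compounds with usage while the rival's stays flat, product A's share ratchets upward every round – the "virtuous circle" in miniature.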

The other business that has used a lock-in loop really effectively, of course, has been Facebook. But Facebook has got a couple of other things going for it, because yes, it's got the machine learning lock-in loop, but it also has a network effect lock-in loop, because all my friends are there. And then if we look at new industries, the car industry is a great example, because cars are going to become computers on wheels, genuinely just computers on wheels, and so the need to get training data is so paramount for these vehicles, and of course, that data in itself will become a moat that prevents competitors coming in.

Matt — So we talk about GAFA, the large Chinese technology companies, and obviously they're all deploying machine learning across the board, which is helping them develop these lock-in loops. But doesn't that also mean that maybe now, more than ever in the past decade, there's a chance for new entrants to come in, if they're able to adopt that kind of, if they can basically glide past the big four, the big ten? Is that something that you've seen in investing?

Azeem — Well, there's always room for new entrants, because we're seeing febrile startup activity all over the place. We're seeing the typical cycles as well, you know, valuations getting out of whack and then coming back into place, and I think the startup environment is pretty dynamic.

The issue with having these large companies at the scale that they are is not just about whether they crowd out startup innovation, because an entrepreneur can always find a niche. I wouldn't go out and try to launch, by the way, a generalized friends network as an entrepreneur. Put that one down low on your list. But there's always room for entrepreneurs to find a particular niche.

I think the issue with the GAFAs is that they are becoming so large that they are able to distort not just markets but societies and political power, and that's the point at which we have to start to ask questions about how we deal with them.

Matt — Do you have an opinion about how we deal with them? Is that a matter of regulation, or-

Azeem — Well, I think if we're going to deal with them, it is probably less a matter of regulation and more a matter of law and social agreement, because many of our regulations are already adhered to by the GAFAs. And even if you think about monopoly and anti-trust, a lot of our anti-trust law is predicated on the idea that a monopolist raises prices to consumers, but of course these products are free to consumers, and the price, by the way, the cost to consumers, is hidden somewhere in the externality, whether it's addiction to social media or whether it's having a non-functioning democracy. These are costs that we have to pay, but they're not costs that anti-trust law looks at. And so that means that if we want to start to make changes, it's going to require activity, I suspect, on quite a lot of fronts, and you're starting to see that. So I think the arrival of something like GDPR in Europe, which is a regulation that essentially gives me rights over my data and my privacy, is something that might start to open little chinks to change the narrative within those companies.

But as I walk around and I talk to a wide variety of people – I sound like a lonely man in the street, wandering around, talking to strange people – when I talk to people in the industry, whether I talk to left-leaning technologists, or venture capitalists, or analysts, there is a strong common thread, which is that somehow these companies need to be regulated, they need to have their terms of reference adjusted, so that they end up being more pro-social. And the thing is, we don't really have great models for this, because history doesn't repeat, it rhymes, and we can't really just go back to Microsoft, or AT&T, or Standard Oil, and say this is what happened last time. I think we're gonna need to do something slightly different.

Matt — So technologist and Wired founder Kevin Kelly once wrote that humans are the sex organs of technology. Is that something you're familiar with?

Azeem – I mean, I'm familiar with Kevin and Kevin's phrase. He is a fantastic thinker, and I think if we go back to our relationship with technology, it is a really intimate one. Technology has changed us, allowing us to then change technology, and on we go. You just have to think about when we first started to figure out how to use tools: our very first flints were only honed on one axis, and then a few thousand years later you start seeing people hone the flints on two axes, so they're getting more and more sophisticated, which requires a different kind of manipulation in the brain. Or if you look at hominid brain volume, so how big is our skull: it was effectively up to about 500 cubic centimeters until a couple of million years ago, when we started to harness fire, and we then see over a period of a million and a half years our cranial capacity triple to 1500 cubic centimeters. That's kind of driven by fire, which allowed us to improve our calorie intake.

We talk about how mobile phones and pen and paper externalize some of our thinking, but we've also externalized some of our digestion, because we started to mash up tubers with these flints, so we didn't need to do a lot of heavy processing of those dense, starchy carbohydrates the way that bovines and monkeys have to do. So compared to other hominids, our lower intestines are, relatively speaking, smaller than the lower intestines of animals who haven't externalized that digestion to technology. So that's how technology changes us, or sorry, that's how we get changed by technology, and then the question is why do we continue to improve technology, and that gets driven by very, very simple localized decisions. You know what, sometimes it's just easier to figure out how to store the food rather than kill it every single day, and so we figure out clay urns and all those sorts of things.

We make these decisions in a kind of intimate, localized way, and then the next thing we know, we're wondering if we're actually gonna become machines, or machines are gonna become us.

Matt — So there's a lot of debate about that, with people talking about a Terminator endgame, or the singularity. Do you see a difference between the idea of intelligence and consciousness? Is that perhaps where we're running off in two directions, kind of anthropomorphizing these threats when maybe we shouldn't?

Azeem — I think we find ourselves in a really interesting time where… it's almost like the tide has risen. There were things only we did, and we were the center of the world, and starting with Galileo, we suddenly realized that perhaps we're not the center of space, and then with Darwin, perhaps we're not the center of creation. And we still have discussions, very active discussions – I studied philosophy at university – about the nature of consciousness, and the nature of the self, and the nature of dualism. What neuroscience is starting to tell us is that that space that is unexplainable is… deeper and deeper in us – I point to my head, maybe I should be pointing to my heart – deeper and deeper in us than we thought ten or 15 years ago. In other words, we're explaining so much more of it, we're explaining the neural correlates of consciousness, we're creating machines that can increasingly behave like humans.

No wonder we start to ask questions about what it means. Is there a difference between intelligence and consciousness? I think manifestly there is. If you take a definition of intelligence as being the ability to pursue a wide range of strategies to achieve some kind of objective or outcome, which would include changing that objective and saying, "I don't do this now, I want to do this", then I think increasingly we can see that there's a wide range of non-human intelligence, and a wide range of artificial intelligences that we've created.

If we think about consciousness – and I'm gonna be on really slippery ground here – we think about consciousness as our awareness of our own subjective experience. That is, literally, a hard problem: how does that come about? I don't think we've got a great model for arguing that any machine we've built has it, but I don't think we've necessarily got a good model for saying that machines can't ever have it. And I'm a little bit of a fan of this idea of consciousness emerging out of the complexity of a system and its ability to process information – that's called integrated information theory. It is sort of a bit like saying that everything's got consciousness, and that… dumber things have less consciousness than more complex things, and if that is the case, then I can imagine us on a separate path to artificial intelligence, involving some kind of artificial consciousness.

Matt — These are some huge challenges that come with these new technologies, and as they get more advanced, what do you see as some of the key areas that we should be focused on? Where should our attention be?

Azeem — Well, I think there are four things that I tend to think about. The first is that there does appear to be an increasing rate of change in lots of different domains; the second is that we have a really strong focus on efficiency and effectiveness; the third is that as a result of some of those dynamics playing out, we seem to see a barbelling, a growth in inequality; and the final thing, which is another social dimension, is that there also appears to be a growth in homophily, which is this idea that birds of a feather flock together.

Matt — Filter bubbles, in a way.

Azeem — Filter bubbles, yeah. So let's talk about the homophily one, because that's perhaps not so obvious, but we are so familiar with filter bubbles. We're familiar with the idea that if you go on Twitter, and you're sort of liberal or progressive, you won't see comments from people who have opposing points of view, and when you do, you won't be able to put up with them, so you'll start to shut them off, and the reverse is true. And one of the reasons why that happens is because we can now see lots of other opinions, and we can build communities that are not constrained by geography. Lisbon, which is where Unbabel is based, is a great example of that: it's attracting lots of people who are not from Portugal, so they don't have that kind of Portuguese bacalhau-eating ethic behind them, but they've come because of a shared set of values, and so there's homophily writ large. We see homophily as a kind of sorting process that happens in cities, where people with generally similar points of view can afford to live in a particular area, and then people with different points of view start to live in different areas. And then, even when you start to get people with different points of view living in similar areas, you start to see some of the frictions that perhaps in the UK made themselves known in the Brexit referendum.

The problem with homophily is something that arises as we increase mobility, as we increase people's ability to not be rooted to the spot where they were born, and I'm not sure we realized this would be the case when we implemented things like more mobility, but it does seem to be very, very much the case. That's why in the US and the UK you see that your zip code matters an awful lot in determining and describing who you are, and I think that becomes an issue. And I'll give you a funny example that we came across, which was that there's some evidence that since the rise of online dating, and in particular Tinder, there's been a growth in the number of interracial marriages in the US, and the reason is that you're able to just meet people based on however you feel – we can rationalize it one way or another. But one thing that seems to be declining is the rate of inter-politics marriages, that is, Republicans marrying Democrats, Democrats marrying Republicans, because we are birds of a feather who want to flock together.

Matt — Kind of a world without shared consensus.

Azeem — Well, I think that's interesting, right, a world without shared consensus, so how do you go about…

Matt — Can you even have a society if we're all in our own little hyper partisan boxes?

Azeem — Well, I mean, I think back to Margaret Thatcher's phrase. She said, many years ago, there's no such thing as society, there are only individuals, and that really comes back down to part of that neo-liberal mind frame that really came to birth in the seventies: companies only exist for profit, people only exist for their own self-advancement. And… clearly, to your last question, we can't, but we also know that people still look for ways in which to… cluster together and have shared experiences. And I think our experience right now, with what's happening in, I can talk about UK politics and a little bit about US politics, is that… there isn't much of a shared overlap, and there is an increasing polarization, but I'm not sure if that's necessarily the case.

Some of the factors that drove that are tied into the media business model, the business model of advertising and the business model of audiences, and they're also tied to political decisions to underinvest in things that generate social goods within society. And this isn't just the fault of the internet, or the fault of Facebook; a lot of it is stuff that's been written about for many, many years – Robert Putnam with his book, Bowling Alone, which was written 15 years ago and was about the fact that Americans are not as convivial and collegiate as they used to be.

As an aside, your Instagram feed has just gone absolutely crazy, and, proving your millennial qualifications, you immediately flipped over to it: "What's up? I'm getting likes. I am getting likes."

Matt — What do you think of the power dynamics enabled by these technological systems? It seems like more and more wealth and power is being concentrated in the hands of a very few people. I guess, is that a good thing?

Azeem — I think concentrated power is never a good thing; you really want to distribute it as far as you can. And what we have seen is, as you say, this sort of enormous power law effect, which means very few people get a lot, and the people below them get a little bit less, and eventually many of us live in the long tail. It's, in a way, the effect that Yuval Harari talks about when he talks about the sort of immortal, godlike humans and a useless class, because a handful of people will accelerate away in their wealth, and their access to amazing health, medicine and experiences, and we see this replicated time and again. If you look at bitcoin, there are about 30 million bitcoin addresses as we're recording this; about 2,000 to 2,500 of them have ten million bucks or more in bitcoin, but two million of them have 100 dollars or less, so there's a real power law there. It's not the old Gaussian hump distribution that we're used to. But I think the dirty secret of this is that it's not really something that comes about as a result of technology.

By the way, Pharaonic Egypt was not exactly a fair, egalitarian place. But when we look at some of the agent-based modeling of economies and wealth, which is the sort of work some complexity scientists have done,

You end up with a power law dynamic, you just do, and you can change your assumptions, you constantly end up with that dynamic, until you start to implement some form of redistribution. And the redistribution that is in general most effective is a tax on wealth: not a tax on income, not a tax on transactions, but a tax on how much you have at the end of each time period, that is redistributed disproportionately to the people who have the least. That is the only way you can get a distribution of wealth that starts to approach fairness, and it's not even this sort of mythical Gini of zero, where we all have the same amount, but it's getting kind of better than that. Now, the trouble that we have is that's not how tax systems work in most countries, and in fact we're going the other way in many countries, so the Americans are taking aim at the inheritance tax and death taxes, which are actually the most efficient way to avoid inequality spiraling out.
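The modeling result Azeem refers to can be illustrated with a minimal agent-based exchange economy: agents trade at random, wealth drifts into a heavy-tailed distribution, and a periodic wealth tax, redistributed back to everyone, pulls the Gini coefficient down. Every parameter here (stake size, tax rate, cadence) is an invented illustration, not one of the complexity-science models he has in mind:

```python
import random

def gini(wealth):
    # 0 = perfect equality, 1 = one agent holds everything.
    w = sorted(wealth)
    n = len(w)
    return 2 * sum((i + 1) * x for i, x in enumerate(w)) / (n * sum(w)) - (n + 1) / n

def simulate(n_agents=1000, rounds=20000, tax_rate=0.0, seed=0):
    rng = random.Random(seed)
    wealth = [100.0] * n_agents
    for t in range(rounds):
        a, b = rng.randrange(n_agents), rng.randrange(n_agents)
        if a == b:
            continue
        # A coin flip moves 10% of the poorer party's wealth.
        stake = 0.1 * min(wealth[a], wealth[b])
        if rng.random() < 0.5:
            wealth[a] += stake; wealth[b] -= stake
        else:
            wealth[a] -= stake; wealth[b] += stake
        # Periodic wealth tax (not income, not transactions), with the
        # proceeds handed back equally -- a crude stand-in for
        # "redistributed disproportionately to those with the least".
        if tax_rate > 0 and t % 1000 == 999:
            pot = sum(tax_rate * w for w in wealth)
            wealth = [(1 - tax_rate) * w + pot / n_agents for w in wealth]
    return wealth

untaxed = gini(simulate(tax_rate=0.0))
taxed = gini(simulate(tax_rate=0.05))
# The wealth-taxed economy ends up markedly more equal.
```

Run it with any seed and the drift toward inequality appears every time; only the redistribution step reins it in, which is the point being made.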

Now, does technology make that process faster?

Well, yes, it's made everything faster, because we can move money around the globe much more quickly; you can move your talent if your talent is in demand, and so that sort of accelerates this process that is always happening underneath. So it's definitely an issue, an issue that's made significantly worse with what we can now do with the sort of global set of technologies that we have in place. You know, one thing that people talk about is the need to have some kind of UBI, universal basic income, the idea that there are certain classes of goods that we should provide as a social good, in the same way that we provide policing and we provide parks. And I find, I mean, I find UBI quite interesting. I'm sort of not sure what the big fuss is on either side of the argument, and certainly not sure what the haters' view of UBI is, other than it's a mechanism for redistribution. So fundamentally, I think the question seems to be: do we want to have a society where people can participate, for the benefit of the whole of society, including ourselves? And do we think that things like inequality are problematic from an economic perspective? Inequality seems to be problematic from an economic perspective because what it tends to do is take people out of creative, productive environments into a sort of maintenance and sustaining mode, and it is often the case that societies that are very unequal tend to grow less fast than societies that are more equal.

And that's one issue. So one issue is just about thinking about these things in terms of: if we have such things in society, we often ask the state, or whatever is gonna replace the state, to fill in the gaps, the things that you and I, by private contract, won't ever manage – cleaning the streets, making sure there's an emergency ambulance available, the police, figuring out climate change. And in my sort of cod-economist way of thinking, thinking about things like inequality and equal access to participation is very important, because those things are not being provided through the market mechanism. And I think there's enough work that's been done by, say, the MIT Media Lab, which famously put the early versions of the so-called 100-dollar PC in Kenyan villages that barely had electricity, and found that the kids were programming these things within two days.

It was all about their access and their ability to participate. But I think the thing that's very hard about all of this ultimately, which people may not want to admit, is that whenever you redistribute power you have to take it from someone and give it to some other people, and it's as true as history that people often don't want to give that up.

Matt — So a lot of our conversation today has touched on the kind of ethical considerations of this technology. What else is there that we should keep in mind?

Azeem — Well, I think machine learning in particular is such a powerful technology, and it is not value neutral, like all other technologies, and what that means is that it comes to its use with a long history of… biases and prejudices and standards that have changed. So we've seen many cases of machine learning systems going wrong, denigrating particular racial groups, being applied, for example, in bail assessments and having a sort of racial bias in them. We've seen it in natural language… domain systems. I love this particular one, where someone showed that in a particular machine translation example you could go from Turkish, which doesn't have gendered nouns, and translate it into a language with gendered nouns, and it would translate doctor as male and nurse as female – and of course that wasn't an Unbabel example, because obviously you've got humans to make sure that stuff works properly. But that kind of bias is systemic, because we're learning from the experience we've had previously, and so the problem that I see in machine learning is that if you simply throw data at it, it's gonna learn about the society that we had, not the society that we want to have. Homosexuality was for a long time classified as a mental disorder, until quite recently.

What's good news in this space is that a lot of people recognize this, and so there's now really great work happening at the Turing Institute and the Centre for the Future of Intelligence, both in the UK, and by AI Now in the US. And these people are often scientists who're identifying the problem, formalizing what the problem is, coming up with methods to resolve this, often in quite a deep and mathematical way. We haven't solved the problem, because it's not solved in and of itself by having some great math or having awareness about it; we have to do more than that. But if we don't solve it, the problem is that everything we do is going to be interfaced through devices, and these devices will have machine learning behind them, so these things will literally be baked into our systems. Where I'm a little bit less optimistic is that many of the decisions that get made, get made by product managers who are living the practicality of having to ship software, and so they're gonna take shortcuts. And so long as we are chasing growth, or chasing the next KPI, or OKR, whatever it is, my temptation as a product manager just to throw out that check, 'cause it's only gonna affect that lot of people, is pretty high. This is what we've seen with notifications on the smartphone, where every product manager has read Lean Startup and knows their Andrew Chen; they know they gotta get your attention, it's like, throw in a notification loop, and now I'm getting 7,000 notifications a day. There's a tragedy of the commons emerging there. So I think we're doing some great work around ethics and AI, responsible AI; people are aware of the issues – Cathy O'Neil's book was very important for this – and there is now much more formal work being done, in good ways, that is being funded, that is helping us come up with formal methods. The reality is these products are completely unregulated, and anyone can produce them; technology has proliferated, and…

Do the people who are at the frontline have any incentive to slow down their rate of development in order to tick a box, which might be how they think about it? I'm not sure, so I think we have more work to do there, but I'm more positive about this than I was two years ago.

Matt — So what's coming up for you? Like, what are you looking forward to in 2018?

Azeem — Well, in 2018, Exponential View will be three years old, and I would like to get to a milestone audience, so if I can get to 50,000 next year, which is kinda double where we are today, that would be, that would be great. I really want to develop the ideas around the convergence of these different strands of technology – we didn't talk about earth observation satellites, or lithium-ion batteries, or genome sequencing – and I'd like to come up with a framework that lasts a bit longer, to help us think through what these actually mean, and how we can participate to make sure the outcomes are positive for all of us.