Differentiating Among AI Risk Timelines and Probabilities

AI is kind of scary. We know this from news coverage, tech moguls, and our own imaginations. Robots may soon take away jobs. They also might become smarter than humans and kill us all. On top of all that, algorithms with disturbingly racist tendencies could enshrine our society’s prejudice in the cloud for ages to come.

But how seriously should we take these threats? How certain or uncertain is each scenario if we stay on our current path? And how long before we start to see their effects? Most accounts of AI risk don’t try to answer these questions. Instead, they conflate these and other distinct AI risks into a single generalized danger. Readers are left feeling afraid without knowing exactly what they are afraid of, how afraid they should be, or what they can do about it.

Take this Quartz article. Ostensibly, its subject is the likelihood of large-scale unemployment due to automation. But near the top is an image of Will Smith in I, Robot; a little further down, you’ll find a close-up of the menacing face of one of the evil machines from the flash-forward sequences in Terminator 2: Judgment Day.

Or take the panel on AI and government regulation that I wrote about last week. For her first question, the moderator asked each panelist to comment on the likelihood that developing AI might put humanity’s existence in danger. In his answer, Tom Kalil briefly mentioned AlphaGo – a DeepMind-built program that defeated the world Go champion in four out of five games – before pivoting to make points about automation and algorithmic bias. He never directly addressed the question of existential risk, or explained how AlphaGo, automation, and algorithmic bias were or weren’t relevant to it.

Potential existential danger, algorithmic bias, and mass unemployment are all legitimate risks. But they have different causes and different solutions. They also differ enormously in how likely they are to happen at all, and in the timelines on which we could reasonably expect them to unfold.

Take existential risk: the concern that a program will become as intelligent as or more intelligent than humans and kill a lot of people. Of all AI risk scenarios, this is the one that sounds the most like science fiction, and, for reasons that are not unrelated, the one with the greatest amount of uncertainty. Today’s technology is not even close to human-level. AlphaGo is a monumental achievement, but it can only address the narrow task of winning Go matches. Engineers haven’t figured out how to build a machine that can strategize across scenarios to solve a variety of problems, and I’ve seen no evidence that they’re on the cusp of such a breakthrough.

Smart people like Elon Musk and Stephen Hawking, who think superintelligent AI is likely to be developed relatively soon and could have catastrophic implications, shouldn’t be dismissed. There are actually a lot of good reasons to be concerned about the possibility of a superintelligent machine killing everyone, and I recommend reading this blog post on WaitButWhy if you’re interested in what those reasons are. But of all the potential risks from AI, this is the least likely ever to happen, and if it does happen, it probably won’t for at least another 40 or 50 years.

Other than existential danger, the hot topic in AI risk is automation. The thinking goes that computers might soon be able to perform most jobs better than humans and put most people out of work. I’ve written a lot about this before; compared to existential risk, this threat carries much less uncertainty and a much shorter timeline. Many factors will determine whether it transpires, but those factors can be studied right away. Unlike with existential risk, we aren’t just speculating about the future impact of a technology that doesn’t exist yet: many jobs could, in theory, already be automated away by existing technology, even if they haven’t been.

For a balanced analysis of the evidence for and against the likelihood of mass unemployment, I recommend reading The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee. If we are ever going to see mass unemployment due to automation, I personally think we can expect to start seeing it sometime between 2030 and 2040 – around the time autonomous vehicles are most frequently hypothesized to become widely adopted. If we move beyond mass adoption of autonomous vehicles and unemployment is still below 5%, I’d count that as strong evidence that the economy is inherently resistant to technological unemployment.

Of these three issues, I’ve studied algorithmic bias the least. That’s to my discredit, because it’s the one with the least uncertainty and the shortest timeline. Already, poorly constructed algorithms lead to greater police harassment of people of color and to discriminatory practices in criminal sentencing and in the evaluation of loan applications. This is happening today, and we should take steps to deal with it immediately. Read Weapons of Math Destruction by Cathy O’Neil to learn more.

There are many other risks AI poses to the public interest: cybersecurity, weaponized autonomous drones, autonomous vehicle safety, to name just three. But the only thing these issues have in common is that they all stem from the same field of computer science. It would help if public discussions recognized the differences among the kinds of risks AI poses. Reasonable discussion of the various threats, and of ways to mitigate them, will serve society. Vague, generalized fear of AI will not.