Hillary Clinton's Narrow AI Take

Last week, worlds collided: Hillary Clinton was quoted talking about AI. Per Jack Clark's Import AI newsletter:

At first I felt a flurry of excitement that such a well-known political figure was calling attention to an issue I've long been pining for well-known political figures to pay attention to. But after reading what she said (full transcript here), I was vexed by how she discussed AI as a singular threat that can be met with a single policy. "AI" is a really broad technological concept with almost unlimited applications. Clinton talks about addressing it with a blue ribbon commission as if it were a problem as narrow as the growth in entitlement spending. This unfortunate way of thinking is very common.

Treating AI like it's a singular threat feels right to some people for a few reasons. To the uninitiated, it fits the vision of threatening AI presented by Hollywood and furthered by news articles that reproduce still images from Terminator while ostensibly discussing serious issues. It allows them to go down the familiar path of personifying AI.

Then there are those, many of whom are in the top 0.01% of AI expertise, who treat AI as a singular threat because there’s one potential threat that stands out as more urgent than all the rest. This group includes the many brilliant folks who are studying the existential risk of developing superintelligence. (I am not using “brilliant” sarcastically – they make a strong case and are much smarter than I am). They tend to think that this risk is so high and imminent that worrying about other AI threats is like worrying about the long-term impact of carriage horses on urban public health right before the car was invented. 

Then there are those who are horrified that we are enshrining our culture’s bias and exclusivity in the algorithms that are increasingly controlling our lives. Finally, as Clinton suggests, there are those who are certain that AI is soon going to supersede human abilities such that many people will suddenly lose their jobs.

But AI isn’t just one of any of these issues; it’s all of them, and more. The areas of policy for which AI has implications range from cybersecurity to warfare to transportation to employment and the safety net to policing and privacy, and yes, I believe at some point, existential risk. Smart consideration of the impact of AI should go beyond consideration of the machine learning algorithms and robots that are getting so much play on Twitter. It must be part of a more general discussion about the machine algorithms that are becoming central to our lives and the platforms that build and operate them. How is one blue ribbon commission supposed to come up with unified policy for all of that?

We don't need an "AI law;" we need a massive uploading of knowledge and understanding about AI into the minds of the people who are writing all the laws. I'm not sure how you do that. Maybe a really smart AI person might have an idea. Algorithms, and the AI breakthroughs that make them more valuable, aren't a distinct area of policy. They are relevant to every policy.

In some sad way I'm gratified that AI policy would have been a focus of the Clinton administration. But I suspect that she, like even the most forward-thinking public servants, still isn't grasping the scale of what's happening.