My 2017 So-Far Reading List

A few people recently asked me what I’ve been reading to stay informed on all this technology / public interest stuff I’ve been fulminating about for the last six months. So today I present to you a list of all the research papers and some of the blog posts I’ve consumed in 2017, with a summarizing thought about each.

As you can see, I’ve read a lot about artificial intelligence. But I don’t want to focus on AI exclusively.

What I’m interested in is how information technology, broadly speaking, transforms the way we live, work, and think. Information technology is a factor in many of the most glaring problems we face, including inequality, privacy, and polarization. I’m interested in AI insofar as AI represents the creeping growth of the impact information technology can have.   

Despite the many papers on this list about superintelligence, it has become a background feature in the landscape of my concern about the future. The implications of superintelligence are dire enough that we should still pay attention to it, but it remains in the background because of the high level of uncertainty that surrounds it, and because many scientific breakthroughs are necessary for it to become realistic.

Foreground features are scenarios that have a little more certainty. The one area I’ve spent the most time trying to understand is the impact of algorithms on the economy: the potential for automation, skills-biased technical change, and the implications of value destruction in software-disrupted industries.

In the next few months, I’m going to focus more on understanding the long term, systemic implications of algorithmic capabilities that either already exist or can be created without any theoretical scientific breakthroughs. And I’m going to ponder the kind of institutions we should look to build, given how IT has transformed the 20th century world everyone liked so much. This blog post from Stratechery from last year is an example of the kind of analysis I want to do, though I don’t agree with everything in it.

Without further ado, here’s my reading list sorted by topic:


AI, general

“Artificial Intelligence and Life in 2030,” Stanford University One Hundred Year Study Panel on Artificial Intelligence

This report, which was authored by over 20 AI experts in fields from computer science to law, gives an overview of AI technology and near-term AI policy issues. The authors take a tough line against fear-mongering about AI’s potential impact, which they think will lead to overregulation. And when regulation is necessary, they recommend government issue broad mandates, with strict transparency requirements and tough enforcement, as opposed to detailed rules.

“Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Matthew Scherer, Spring 2016

Weighs the pros and cons of different approaches to regulating AI applications of all kinds: more or less interventionist, and whether regulation by legislative, executive, or judicial means makes the most sense. Proposes the creation of an AI regulatory agency responsible for certifying AI systems offered for commercial sale as “safe, secure, susceptible to human control, and aligned with human interests.” Uncertified AI systems would not be banned, but would be subject to much greater levels of liability. I liked this proposal’s attempt to strike a balance between promoting safety and not hindering innovation. Extra credit to the author for thinking through a specific proposal, instead of the “raising of questions that need to be answered” or “descriptions of high-level principles” that dominate AI policy thinking.

“Big Data and Artificial Intelligence: The Human Rights Dimension for Business,” Official Conference Notes, February 2017

Summary of a conference about AI and corporate social responsibility. Concluded that if industry is going to avoid government interference, they’ll have to come up with their own standards for ensuring that AI programs serve the common good and reflect values shared across different cultures. This dovetails pretty nicely with the “Artificial Intelligence and Life in 2030” paper’s advocacy for broad legal mandates that stimulate proactive self-regulation by businesses.

“Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence,” Seth Baum, May 2017

Among AI experts, there are those who are seriously worried about superintelligence ending humanity, and those who think superintelligence is barely more imminent than teleportation. The two groups seem to have contempt for one another. This paper wants everyone to stop fighting and be friends. I think that’s a good idea.
 

“Artificial Intelligence Policy: a Roadmap” (Draft), Ryan Calo, August 2017

An overview of ways AI could transform society in the medium-term and the questions government needs to answer before regulating it. Doesn’t contain recommendations for specific regulatory approaches. Raises the question of whether we should be concerned about the danger of superintelligence, only to deride it as equivalent to “focusing on Skynet or HAL.”
 

Advanced AI, AGI, and Superintelligence

“Machine Super Intelligence,” Shane Legg, June 2008

Intelligence is notoriously hard to define, but in his dissertation, one of DeepMind’s co-founders takes a really good crack at it. His mathematical definition of intelligence (and AI) is extremely well thought through and applies in almost any situation I can think of.

“Racing to the Precipice: a Model of Artificial Intelligence Development,” Stuart Armstrong, Nick Bostrom, Carl Shulman, October 2013

Nick Bostrom’s works actually are like what Christopher Nolan’s movies are intended to be but never are: abstract, theoretical, and logically sound. That makes his work really interesting, even if I wish it were a little more down to earth sometimes. In this case, he and his co-authors model the optimal conditions for multiple teams working on creating superintelligence while minimizing the chance of existential risk. Factors include the benefit of risk-taking vs. skill level in building AI; the level of enmity among competing teams; and the amount of information sharing among teams. Counter-intuitively, the more teams know about one another’s progress, the more likely one of them is to scrap safeguards, increasing the risk of a catastrophic accident.
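To make that counter-intuitive result concrete, here is a deliberately crude toy simulation in the spirit of the paper’s setup. It is not the authors’ actual model: the 50/50 guess for teams that can’t see their rivals’ progress is my own assumption, made purely for illustration.

```python
import random

def simulate_race(n_teams=3, info_sharing=True, trials=10_000, seed=0):
    """Toy Monte Carlo: how often does some team scrap its safeguards?

    Each team has a hidden capability in [0, 1]. A team cuts safety
    corners when it believes it is behind. With full information
    sharing, every trailing team *knows* it is behind and cuts corners;
    without sharing, a trailing team only guesses (50/50 here).
    """
    rng = random.Random(seed)
    risky_races = 0
    for _ in range(trials):
        caps = [rng.random() for _ in range(n_teams)]
        leader = max(caps)
        someone_cut_corners = False
        for c in caps:
            if c < leader:  # this team is actually behind
                believes_behind = info_sharing or rng.random() < 0.5
                if believes_behind:
                    someone_cut_corners = True
        risky_races += someone_cut_corners
    return risky_races / trials

print(simulate_race(info_sharing=True))
print(simulate_race(info_sharing=False))
```

With sharing, every race has a trailing team that knows it and cuts corners; without sharing, some races end with nobody panicking, so the rate of corner-cutting falls. That is the paper’s dynamic in miniature: transparency about relative progress can make the race less safe.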

“When Will AI Exceed Human Performance?” Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans, May 2017

Survey of computer scientists around the world working on AI about the progress of AI capabilities of certain tasks. Among its most interesting findings: Asian AI researchers were much more likely than North Americans to believe high-level machine intelligence (the point at which machines can perform any work task better than humans) is just a few decades away.

“AI Policy Desiderata in the Development of Machine Superintelligence,” Nick Bostrom, Allan Dafoe, Carrick Flynn, 2016

One of the only written attempts to answer the question, “how should our institutions address the possibility of superintelligence?” Lays out principles so broad that their relationship to any specific policy proposal is about as direct as the relationship between a rain cloud and the puddle outside my front door. Future of Humanity Institute is going to put forward some more specific ideas based on these desiderata shortly, and I’m excited to see them.

“AlphaGo and AI Progress,” Miles Brundage, February 2016

This is a blog post that examines AlphaGo, the DeepMind-developed algorithm that defeated the world Go champion in four out of five matches. It lightly critiques the common response that a machine Go champion was developed many years earlier than expected. In addition, this post first brought to my attention the fact that training and running AlphaGo demanded extremely high levels of power and computing resources, making its success seem slightly less impressive than many headlines had taken it to be.

“Some Background on our Views Regarding Advanced Artificial Intelligence,” Holden Karnofsky (Open Philanthropy Project), May 2016

“Potential Risks from Artificial Intelligence: the Philanthropic Opportunity,” Holden Karnofsky (Open Philanthropy Project), May 2016

These two blog posts lay out the case for why philanthropists should bother investing in long term AI safety. I thought it was very persuasive, and think it’s an important area of research, even if it’s not what I’m going to focus most of my time on in the next couple of months. I also thought the author’s definition of “transformational AI” was more helpful than others like “high level machine intelligence” or “artificial general intelligence,” which analogize algorithms with humans.


AI & the Economy (Automation)

“The Future of Employment: How Susceptible are Jobs to Computerization?” Carl Benedikt Frey and Michael Osborne, September 17th, 2013

The most famous study on automation. Its conclusion that 47% of US jobs are at high risk of automation (even though that’s not really what the conclusion was) really sounded the alarm for a lot of people about the possibility that robots might take all of our jobs very soon. I think it’s a great study, but has been badly misinterpreted, as I’ve written.

 

“How Computer Automation Affects Occupations: Technology, Jobs, Skills,” James Bessen, October 2016

I read this paper only recently and I wish I had sooner. Most studies of automation assume that if machines can do something as well as a human, all humans doing that thing will be out of their jobs. This study is one of the only ones I’ve read that questions that assumption and investigates what’s happened to employment in occupations once their functions have become automatable. Read this if you are worried a lot about automation and you want to sleep better at night.

“Can Robots be Lawyers? Computers, Lawyers, and the Practice of Law,” Dana Remus and Frank Levy, November 2016

Deep dive into one of the categories of work that many people think will soon be lost to robots. The conclusion: no, lawyers will not be automated away anytime soon. According to their model, only 13% of lawyers’ hours would decline if all of the newest legal tech were adopted immediately. This was a good examination not only of how technology develops but how it can change the workforce, from the perspective of a specific job.
 

“Artificial Intelligence, Automation, and the Economy,” Executive Office of the President, December 2016

This paper got a ton of attention because, well, it was the White House basically saying, “AI is a real thing that will have an impact on the economy and we should prepare for that.” Its conclusions: implement the mainstream Democratic Party agenda while taking a “wait and see” attitude towards the possibility of even bigger policy shifts. That conclusion was a little disappointing but this was still a big deal coming from the White House.

 

“A Future that Works: Automation, Employment, and Productivity,” McKinsey Global Institute, January 2017

McKinsey’s analysis of the automation question goes a lot deeper than most. Instead of analyzing entire occupations and their potential for automation, it looks at the likelihood of automation of specific work tasks. This gives you a more nuanced and measured view of automation than Frey and Osborne. Their headline conclusion was that 60% of jobs will have at least 30% of their tasks likely to be automated within the next several decades. I liked how this report provided alternate timelines for how quickly these changes might take place. I also liked its point that if we’re going to maintain economic growth this century, we need the productivity gains that automation brings. In other words, automation may be a good thing.

 

“Information Technology and the US Workforce: Where Do We Go From Here?,” Committee on Information Technology, Automation, and the US Workforce, March 2017

Assesses AI progress and lays out several hypothetical impacts on the workforce while raising further questions for research. I can’t possibly summarize every relevant point from this 160-page paper, but one tidbit that stuck out to me was its recommendation that we explore new data sources to better measure the pace of AI’s adoption and its impact on the workforce. It got me thinking about what kinds of naturally-occurring data might shine a light on whether automation is happening at all, and if so, at what pace.
 

“The Shift Commission on Work, Workers, and Technology: Report of Findings,” May 2017

Report of the Shift Commission, which was several groups of leaders in various fields getting together to talk about the future of work. The report surmises that there are four possible scenarios for what the future of work looks like: 1) less work, mostly tasks; 2) less work, mostly jobs; 3) more work, mostly tasks; 4) more work, mostly jobs. I like how this report didn’t make a single prediction, but analyzed different scenarios that are all very possible.

 

Other Future of Work

“Recommendations for Implementing the Strategic Initiative Industrie 4.0," Federal Ministry of Education and Research (Germany), April 2013

Really dense German government report on the benefits of companies having their production machinery, supply chains, shipping, headquarters, and all other physical assets networked, to allow for production of small batches of goods and instantaneous decision-making based on granular data. Also discusses measures companies of all kinds need to take to successfully transition to such a system. My description is about as dense as the paper itself; please forgive me.

Portable Benefits Resource Guide, Natalie Foster, Greg Nelson, and Libby Reder (The Aspen Institute Future of Work Initiative), 2016

Explores ways for all workers in the economy to have health care and other benefits, even if they work as independent contractors. Of particular interest are strategies for providing benefits to Uber drivers and other gig economy workers.


“The Role of Unemployment in Alternative Work Arrangements”, Lawrence F. Katz and Alan B. Krueger, December, 2016

Research paper describing the long-term increase of independent contractors as a share of the workforce – from 10% in the 1990s to around 16% today. Gig economy workers still comprise a relatively tiny subset here, though they are growing. Remember, UberX didn’t exist until 2013. That’s a crazy thought.  
 

Autonomous Vehicles

“Autonomous Vehicle Technology: A Guide for Policymakers,” James M. Anderson, Nidhi Kalra, Karlyn D. Stanley, Paul Sorensen, Constantine Samaras, Oluwatobi A. Oluwatola (RAND Corp.), 2016

Comprehensive analysis of AV technology, its implications, and various attempts to regulate it around the US. Concludes that, due to positive externalities associated with widespread use of AVs, it might at some point make sense for policymakers to align the public and private costs of AV tech. For the moment, though, the authors think any aggressive regulatory action will do more harm than good.

“Fast and Furious: the Misregulation of Driverless Cars,” Tracy Hresko Pearl, 2016

Takes a deeper dive than the RAND paper into existing regulations of AVs and why many of them are problematic. The most problematic regulations stem from irrational fears about the safety of AV technology, and from ignorance of the different levels of autonomy classification and the different treatment each level requires.

 

Online Voting

"The Future of Voting: End-to-end Verifiable Internet Voting,” US Vote Foundation, July 2015

A big divergence from a reading list overwhelmingly focused on AI and associated technologies. But I read this report about online voting because I believe that if we’re going to have a fully enfranchised electorate this century, we eventually need to allow people to vote on their smartphones. I can’t tell you how many friends I’ve talked to who said they would vote if only they could do so. This report lays out a framework for exactly how voting should be brought online, if it ever is. The critical aspect is that online voting systems need to be end-to-end verifiable, meaning every person can independently verify that their ballot was counted accurately, without compromising ballot anonymity. The security challenges to making this happen are immense, but at some point they will have to be overcome.
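To make “end-to-end verifiable” slightly more concrete, here is a toy sketch of one ingredient: a hash-commitment receipt. This is my own simplification for illustration, not the report’s protocol; real end-to-end verifiable systems add much more machinery (encrypted ballots, mixnets or homomorphic tallying, challenge audits) to prove the count while preserving anonymity.

```python
import hashlib
import secrets

def commit(ballot: str):
    """Return (receipt, nonce). The receipt is a hash commitment:
    without the nonce it reveals nothing about the ballot, but it
    lets the voter later check a public bulletin board."""
    nonce = secrets.token_hex(16)
    receipt = hashlib.sha256(f"{nonce}:{ballot}".encode()).hexdigest()
    return receipt, nonce

# Election side: publish every receipt on a public bulletin board.
board = []
receipt, nonce = commit("candidate_a")
board.append(receipt)

# Voter side ("recorded as cast"): recompute the commitment from the
# ballot and nonce kept privately, and confirm it appears on the board.
check = hashlib.sha256(f"{nonce}:candidate_a".encode()).hexdigest()
assert check in board
```

The point of the sketch is the division of trust: the board is public, so anyone can audit that their receipt was recorded, but the receipt alone doesn’t reveal how anyone voted.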

Books, TV, and Music from August 2017

Books

The One-Straw Revolution, by Masanobu Fukuoka: Masanobu Fukuoka’s manifesto about farming, science, society, and how to live a good life. His thesis is that modern agriculture throws us out of balance with nature and creates more problems than it solves. He begins by describing his methods of "do-nothing" farming: eschewing weeding, plowing, pesticides and many other erstwhile essential activities, but still achieving as high a yield as any modern farm. Then, he expands into a wider commentary about the world. It's radical stuff, but I strongly recommend it if you like to think about how mindfulness applies to culture, society, and the systems we’ve built to prop them up. It was given to me as a counterpoint to Homo Deus by Yuval Noah Harari, but I found that they actually agreed upon a lot, which I’ll be writing more about soon. 

TV

Insecure, Season 2: Insecure shares similarities with Girls; it follows a group of young adult millennials around a famous big city and has a soundtrack that makes you think, how have I not heard this song anywhere before? And it’s at least as smart and funny as Girls was without ever being 10% as obnoxious or infuriating. I can’t believe Issa Rae didn’t get nominated for an Emmy. If you haven’t seen it, go watch, laugh, and learn. 

Music

Time (Tale of Us Edit), by Hans Zimmer: I played this track on repeat over four straight days, and I can’t remember the last time I did that. Hearing it tucked into Kidnap Kid’s Anjunadeep Edition episode in August made me re-watch Inception at the first opportunity. Still one of my favorite movies, and this gentle house rework of the soundtrack’s big number made me feel what it was like to see it for the first time.

We Don't Need Superintelligence for AI to be Transformational

In life, it’s a good idea to address immediate threats while keeping an eye on long-term risks. So I was glad to see a paper by Seth Baum entitled “Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence.”

As Baum explains, there are two camps of people who are concerned about AI’s impact. The first, which Baum calls the “presentists,” believe that money and attention should go towards addressing the impact of AI that is already widely in use. The second, the “futurists,” believe we should focus more time and energy on potential threats posed by AI capabilities that don’t yet exist, but that, when and if they are developed, will have a transformative and perhaps disfiguring effect on humanity. The presentist camp tends to include more economists, elected officials, and legal scholars; the futurist camp, more philosophers and AI engineers. Baum tries to reconcile the factions by creating two new ones: “intellectualists,” who believe in creating AI for its own sake, and “societalists,” who believe AI should be developed for its impact on society.

I appreciate Baum’s attempt to bridge the divide. But I also feel there’s a hybrid approach that borrows from both presentists and futurists: one that focuses on long-term time horizons, but makes no assumptions that we’ll see any theoretical or scientific breakthroughs in the next several decades. Instead, this approach would analyze the long-term impact of technical capabilities that exist today, but haven’t yet been applied throughout society in a way that would allow their impact to be felt. I haven’t seen such an approach explored in any of the literature I’ve read.

I like the futurists’ focus on transformation; the Industrial Revolution showed us that technology can change society beyond recognition in a short space of time. And I expect information technology to transform society at least as much. But most futurist analyses of AI are based on risks associated with developing artificial general intelligence or superintelligence. And while I don’t dismiss this possibility, it’s extremely uncertain. It’s worth time and money to keep thinking about AGI, superintelligence, and different scenarios that could arise. But there’s nothing any policy or political actors can really do about it before it gets closer to reality. And thinking only about AGI and superintelligence discounts the possibility that very powerful yet narrow applications of AI could yet have a transformative impact.

So the presentists’ focus on the impact of narrow AI applications that already exist is appealing. But most presentists I’ve read or listened to shy away from all but the shortest time horizons. There is a lot of great scholarship on how to address the needs of gig economy workers and on how to reduce algorithmic bias. But in their focus on immediate issues, the presentists seem to preclude the possibility that AI might introduce deep changes to social organization, the mandate of government, and the human experience. To me, that is unimaginative and shortsighted.

I think that the impact of even just today’s AI capabilities on the way people work, think, and interact will be immense. Many such capabilities that have been developed in lab settings have yet to have their impact felt. We will see much larger changes as those capabilities move to the marketplace and seep through society. It would be interesting to see a study analyzing the 30-year time horizon of today’s state of the art in AI becoming cheaper, easier to use, and more applicable to everyday business and personal situations.

For example, can recent breakthroughs in deep learning that have gotten so much attention, like AlphaGo, be adapted to business-relevant applications? What would the ripple effects of this be? And in the future, will it be possible to build and run AlphaGo-comparable programs at a lower cost of manpower and hardware? This question could reflect a misunderstanding of how the technology works, in which case I’d love to be corrected.

Also, there will almost certainly be a greater long-term effect of existing AI capabilities on children who are growing up with them than there has been on adults to whom they were introduced later in life. What will be the psychological impact in adulthood on infants and children who grow up with voice-responsive smarthome technology like Alexa?

Moreover, as far as I know, many narrow AI capabilities can continue to advance without major theoretical breakthroughs. And as these capabilities are fine-tuned and expanded upon, I think we can expect to see even greater transformations. Everything about ourselves and everything that surrounds us can be defined by data, because everything, in theory, can be measured.

As the ability of programs to measure and analyze the world continues to increase, I think it will lead to not just new kinds of work and new kinds of entertainment, but to new ideologies, new forms of spirituality, and new ways of thinking about what it means to be human. And this can take place without the theoretical breakthroughs needed to create AGI or superintelligence.

Both presentists and futurists should keep doing what they’re doing; I think both have tremendous importance. But personally, I want to merge the future-looking orientation of the futurists with the presentists’ skepticism of superintelligence. By pursuing this path, we can come up with a vision for the future that is transformational, yet measured enough to minimize the chance of a big intellectual swing and a miss. We can be mindful of the possibility of radical transformations without banking on a specific technology that isn’t close to existing yet.

 

Differentiating Among Different AI Risk Timelines and Probabilities

AI is kind of scary. We know this from news coverage, tech moguls, and our own imaginations. Robots may soon take away jobs. They also might become smarter than humans and kill us all. On top of all that, algorithms with disturbingly racist tendencies could enshrine our society’s prejudice in the cloud for ages to come.

But how seriously should we take these threats? How certain or uncertain is each scenario if we remain on our current path? And how long will it take for us to start witnessing their payoff? Most accounts of AI risk don’t try to answer these questions. Instead, they conflate these and other different AI risks into a single generalized danger. Readers are left feeling afraid without knowing for sure what they are afraid of, how afraid they should be, and what they can do about it.

Take this Quartz article. Ostensibly, the subject of the article is the likelihood of large-scale unemployment due to automation. But near the top is an image of Will Smith in I, Robot; a little further down, you’ll find a close-up of the menacing face of one of the evil machines from the flash-forward sequences in Terminator 2: Judgment Day.

Or, take the panel on AI and government regulation that I wrote about last week. For her first question, the moderator asked each panelist to comment on the likelihood that developing AI might put humanity’s existence in danger. In his answer, Tom Kalil briefly mentioned AlphaGo – a DeepMind-built program that defeated the world Go champion in four out of five matches – before pivoting to make points about automation and algorithmic bias. He didn’t directly address the question of existential risk or how AlphaGo, automation, and algorithmic bias were or weren’t relevant to it.

Potential existential danger, algorithmic bias, and mass unemployment are legitimate risks. But they have different causes and different solutions. They also are completely different as to their likelihood of ever happening, and the timeline by which we could reasonably expect them to transpire.

Take existential risk: the concern that a program will become as intelligent as or more intelligent than humans and kill a lot of people. Of all AI risk scenarios, this is the one that sounds the most like science fiction, and for reasons that are not unrelated, the one with the greatest amount of uncertainty. Today’s technology is not even close to human-level. AlphaGo is a monumental achievement, but it can only address the narrow task of winning Go matches. Engineers haven’t figured out how to build a machine that can strategize across scenarios to solve a variety of problems, and there is no evidence I’ve seen that they’re on the cusp of such a breakthrough.

The smart people like Elon Musk and Stephen Hawking who think superintelligent AI is likely to be developed relatively soon and could have potentially catastrophic implications shouldn’t be dismissed. There are actually a lot of really good reasons to be concerned about the potential of a superintelligent machine killing everyone, and I recommend reading through this blog post on WaitButWhy if you’re interested in what those reasons are. But of all potential risks from AI, it’s the least likely to ever happen, and if it does, probably won’t happen for at least another 40 or 50 years.

Other than existential danger, the hot topic in AI risk is automation. The thinking goes that computers might soon be able to perform most jobs better than humans, and put most people out of work. I’ve written a lot about this before, and compared to existential risk, this threat has a much lower degree of uncertainty, and a much shorter timeline. There are many factors going into whether or not it will transpire, but those factors can be studied right away. Unlike with existential risk, we aren’t just speculating about the future impact of a technology that doesn’t exist yet, since many jobs could already theoretically have been automated away by existing technology, even if they haven’t yet.

For a balanced analysis of evidence for and against the likelihood of mass unemployment, I recommend reading The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee. If we are ever going to see mass unemployment due to automation, I personally think we can expect to start seeing it sometime between 2030 and 2040 – around the time that autonomous vehicles are most frequently hypothesized to become widely adopted. If we move beyond mass adoption of autonomous vehicles and unemployment is still below 5%, I’d say that will count as strong evidence for the inherent resistance of the economy to technological unemployment.

Of all three of these issues, I’ve studied algorithmic bias the least. That’s to my discredit, because of all three of these issues, this is the one with the least amount of uncertainty, and the shortest timeline. Already, poorly constructed algorithms lead to greater police harassment of people of color, and discriminatory practices in criminal sentencing and in the evaluation of loan applications. This is something that is happening today, and we should take steps to deal with it immediately. Read Weapons of Math Destruction by Cathy O’Neil to learn more.

There are many other risks that AI poses to the public interest: cybersecurity, weaponized autonomous drones, and autonomous vehicle safety, to name just three. But the only similarity these issues share is that they all stem from the same field of computer science. It would be helpful if public discussions recognized the differences in the kinds of risks AI poses. Reasonable discussion of the various threats and ways to mitigate them will help society. Vague, generalized fear of AI will not.

Questions and Suppositions about Natural Language Understanding From Someone Who Is Now Unsure of What "Understanding" Even Means

As a starting point for a research project about how advances in natural language understanding will affect the professional services industry, I’m trying to come up with a formal definition of natural language understanding and how it can be measured. So far, I’m stumped. This post doesn’t have a firm conclusion; instead, I’m going to describe the thought process I’ve gone through, which poses an open-ended and almost philosophical question: what does “understanding” even mean, and does it have different meanings for machines than for humans?

Up until now I’ve informally defined natural language understanding as “the ability of machines to understand you when you talk to them.” To begin my search for a formal measurement of machine capability at talking with people, I went to the Electronic Frontier Foundation’s AI Progress Measurement Project, which crowdsources the latest objective metrics on machine capabilities from experts.

The only category EFF had that was related to a machine’s speaking and listening ability was “speech recognition.” So at first, I thought that this might be a good measurement of natural language understanding. But then I saw that speech recognition was measured by a program’s ability to accurately transcribe Switchboard, a large database of recorded phone conversations. And I remembered transcribing radio interviews for the press team on the Deeds campaign. If I hadn’t paused the recording every 20-30 seconds, I would have made a ton of mistakes. But that didn’t mean I wasn’t understanding every word they were saying. Transcription, then, is a completely different process from understanding.
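For reference, Switchboard-style transcription benchmarks are typically scored by word error rate: the number of word substitutions, insertions, and deletions needed to turn the machine’s transcript into the reference transcript, divided by the length of the reference. A minimal sketch of that metric:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate via the standard Levenshtein dynamic program,
    computed over words rather than characters."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete everything
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert everything
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitute = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitute, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i would like a coke", "i would like a coat"))  # 0.2
```

One substituted word in a five-word reference gives a WER of 0.2. Note that the metric says nothing about whether the meaning was grasped, which is exactly the transcription-versus-understanding gap above.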

So what happens when a person hears and understands something? When you listen to someone talk, the sounds they make trigger thoughts and feelings in your mind, provided that your mind holds the semantic memory of the concepts those sounds are supposed to represent, and that you and the person talking to you speak the same language.

When I thought about understanding that way, it seemed impossible that a machine could be anywhere close to human-level performance at natural language understanding. The process of recognizing a vast three-dimensional galaxy of meaning, where every common and many rare physical and intellectual phenomena in the world are represented, seems like an aspect of general human intelligence. In other words, it seems to me that the ability to truly understand what a human is saying, no matter what she says, would be impossible to create in a machine absent artificial general intelligence.

Clearly, natural language understanding is a thing that some machines can do at least a little bit, and there must be a way to measure it. But since those machines don’t have the semantic memory that underlies a human’s language comprehension, machine natural language understanding must refer to programs with limited expected inputs and available outputs. It must consist of transcription plus really good analytics that pair whatever a human just said into the microphone with the most appropriate response.

And if that analysis is correct, it leads to yet another point of confusion: if natural language processing is about a program’s ability to determine an input’s meaning relative to possible outputs, isn’t an assessment of that machine’s natural language processing capability entirely dependent on the range of outputs that are available? Wouldn’t it be like comparing apples and oranges to measure the natural language processing ability of a really complex program with 100 available outputs with that of a simple one with 2 possible outputs?
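One way to make that comparison fairer (this is me speculating, not anything from the EFF project) is to correct raw accuracy for how well random guessing would do, which depends on the number of available outputs. A toy sketch:

```python
# Toy illustration (my own, not from any NLU benchmark): comparing raw
# accuracy across programs with different numbers of possible outputs is
# unfair, because random guessing alone scores 1/k when there are k outputs.
# A chance-corrected score (in the spirit of Cohen's kappa) rescales
# accuracy so that 0 means "no better than guessing" and 1 means perfect.

def chance_corrected(accuracy: float, num_outputs: int) -> float:
    """Rescale accuracy so random guessing scores 0 and perfection scores 1."""
    chance = 1.0 / num_outputs          # expected accuracy of uniform guessing
    return (accuracy - chance) / (1.0 - chance)

# A 2-output program at 75% raw accuracy...
simple = chance_corrected(0.75, num_outputs=2)
# ...versus a 100-output program at 40% raw accuracy.
complex_ = chance_corrected(0.40, num_outputs=100)

print(f"simple (2 outputs, 75% raw):    {simple:.3f}")
print(f"complex (100 outputs, 40% raw): {complex_:.3f}")
```

On this scale the two programs can at least be put side by side, though it still says nothing about whether either one “understands” anything.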

Or is there a standard of measuring language “understanding” where a machine might truly understand English, even if a given input is irrelevant to the program's available outputs?

Consider a soda machine, which can only dispense sodas and say, “Here is your [soda.name], thank you.” And you say, “Soda machine, I’d like a coconut souffle.” Can that machine “understand” what you said, even if it doesn’t know what a coconut souffle is?

Or is natural language understanding just being able to tell the difference between “give me a fucking coke,” “I’d like a Coke please,” and “I shall have the caffeinated brown sugar water in the bright red can?”

I suspect the latter case hints at an operating definition of natural language understanding. But I’m still confused as to how that can be measured across applications of varying complexity.
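To make my confusion concrete, here’s a toy sketch (entirely my own invention, not how any real product works) of the soda machine as a keyword matcher over its tiny menu of outputs:

```python
# A toy sketch of how a soda machine might "understand" the three phrasings
# above: it doesn't model meaning at all, it just maps transcribed text onto
# the closest of its few available outputs via keyword overlap.

AVAILABLE_SODAS = {
    "coke": {"coke", "caffeinated", "brown", "sugar", "red", "can"},
    "sprite": {"sprite", "lemon", "lime", "clear"},
}

def dispense(utterance: str) -> str:
    """Pick the soda whose keyword set best overlaps the utterance, if any."""
    words = set(utterance.lower().replace(",", "").replace(".", "").split())
    best, best_overlap = None, 0
    for soda, keywords in AVAILABLE_SODAS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = soda, overlap
    if best is None:
        return "Sorry, I don't know what that is."  # the coconut souffle case
    return f"Here is your {best}, thank you."

print(dispense("give me a fucking coke"))
print(dispense("I shall have the caffeinated brown sugar water in the bright red can"))
print(dispense("Soda machine, I'd like a coconut souffle."))
```

All three coke phrasings collapse to the same output, and the souffle falls through to the apology, which is exactly why measuring this machine’s “understanding” against a more complex program’s feels like apples and oranges.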

 

Books, Movies, Podcasts, and TV from July 2017

Books

The Last Lion: Winston Spencer Churchill: Visions of Glory, 1874-1932, by William Manchester: I went on vacation last month, and needed a vacation book. At almost 900 pages, the first volume in Manchester’s biography of the 20th century’s most famous drinker was the perfect choice. In addition to detailing Churchill’s early life, it paints a colorful picture of the British Empire at its ripe apex, and of the years where it was just beginning to spoil. Dripping with delectable anecdotes and interspersed with the author’s reflections so articulate that they are basically poetry, it’s so rich and complex that you could pour it into a heavyset glass with a cube of ice. Best enjoyed by a crackling fire (my vacation was to Canada).

Podcasts

House of Carbs: I’m a Ringer fanboy and I’ve always loved listening to Joe House fulminate about some great meal he had recently in between defiant stands on behalf of the dignity of my beloved Washington Wizards, so I was always going to give House’s new food podcast a shot. It’s pretty silly, but given that House is a finance dude of some kind who only got a podcast because he’s college buddies with Bill Simmons, it’s not half bad. Give it a try if you want to hear famous food people describe their last meal or hear House’s preposterous gluttony lifestyle recommendations. Take the host’s advice and don’t listen hungry.

Movies

Dunkirk: From the moment I heard Chris Nolan was making a movie about Dunkirk, I was all in, and I wasn’t disappointed. If you like movies at all, go see it. Music, cinematography, (lack of) dialogue, acting, it all works together to make you feel the desperation of those men on that beach, and also maybe to reflect on how insane it is that World War II actually happened (please let’s not do that again). I was sitting in the middle and had the bucket of popcorn in my lap, which was a bad thing, because for the first 30 minutes I couldn’t stop stress-eating.

TV

The Crown: It’s the slow TV movement meets prestige drama. I’m only halfway through the first season, but so far, there has not been a single genuinely tense moment. That’s fine with me–it’s a pleasure to watch such expensively produced scenes from postwar Britain and well-acted portrayals of famous people from that time without worrying about the typical death and betrayal.

Game of Thrones: Speaking of which, Thrones is back. And I’m enjoying it, but not nearly as much as I’m enjoying hearing Mallory and Jason talk about it on Binge Mode. It’s really exciting to see major characters meet for the first time and watch the plot convergence we’ve all waited so long for finally speed up. But most things that happen aren’t shocking anymore, because the showrunners just don’t have the talent that George RR Martin had for layering rich crusts of foreshadowing while still surprising the shit out of you. By which I mean, every shocking thing that happens, I’m really not that shocked by it, because it’s usually been foreshadowed pretty obviously. (Watch them prove me wrong this week). Also, while I’m still hopeful that the showrunners will answer all the major questions of the ASOIAF universe, I have an aching suspicion that major things are going to be left out – and that some basic facts about the universe or its timeline will be altered from GRRM’s vision. And given that he may never finish the books, that would make me very sad. I wanna know what the deal was with the Doom of Valyria and the Tragedy at Summerhall! And Quaithe, and Asshai by the Shadow! Yeah, wishful thinking.

How Retail Is Similar To the Travel Industry

The charticle in the New York Times from a few weeks ago about the impact of e-commerce on retail employment is another concerning signal about the impact of information technology on the economy. As has happened to travel agents, demand for retail workers has begun to shrivel as their industry uses digital tools for information processing and communication that it used to need humans for.

You should read the charticle, but I’ll summarize the main facts. Department store employment is down 25% since 2002, a loss of 448,000 jobs. E-commerce employment has increased by 334%, adding only 178,000 jobs. Since the overall US population has grown 12.5% since 2002, you’d expect retail employment to have grown at least a little bit. But even with warehouse clubs adding over 800,000 jobs, an 80% increase, overall retail employment has been flat. While e-commerce drives a little over 9% of retail output, it employs barely 2% of all retail workers (see chart from NYT above).

In the 1980s, if you wanted to make sure that a large number of consumers knew about your products and felt comfortable buying them, you needed to build a network of retail locations and staff them with people to answer basic questions. You also needed a physical presence to even let consumers know your product existed and to draw them in; that’s why stores have sidewalk-facing windows to display their products. Today, we have the internet to convey basic product information, answer FAQs, and draw you into the buying area with enticing graphics and animations.

Pre-internet, you didn’t just go to the store to acquire goods themselves; you went to the store to find out what goods you wanted to buy. Separating these may seem like splitting hairs; the way we say, “I’m going to the store,” makes gathering information about the products that are available and actually bringing the physical product into your possession seem like parts of one smooth motion.

But the distinction of those two aspects of going shopping has allowed e-commerce to thrive. The first aspect, where a consumer learns about available products, is simply an exchange of information. You don’t need stores to do that anymore. So as new businesses sell more of their products online and take market share from department stores, malls, and other retailers dependent on physical locations, the industry is going to need fewer and fewer people.

According to the charticle, e-commerce still makes up only 8.4% of all retail sales. That seemed remarkably low to me given how much my friends and I rely on Amazon for basic purchases, and that number is sure to rise as consumer behaviors continue their slow transformation. As this happens, I’m betting that retail employment will begin to see an overall drop, rather than holding flat as it has since 2002.

Brick-and-mortar won’t go away completely; there are a lot of products that sell better offline. And traditional retail’s struggle against Amazon is not news. But I wanted to describe how what’s happening in retail is similar to what’s happening to travel agents and other industries. We tend to consider challenges specific industries face in isolation, when we should be talking about the holistic challenge that the internet presents to labor markets, and plan for a time when humans won’t get paid for routine information processing.

I frequently see people conflate potential long term risks of human-level machine intelligence with reports about large-scale automation of jobs into a single generalized fear about AI. On the one hand, I want to help people calm down a little bit; despite advances like AlphaGo, artificial general intelligence is probably many years away.

On the other hand, retail shows that computers can disrupt labor even before any reinforcement learning solutions come to market.

Books, movies, and podcasts from June 2017

This week I’m going to try something new and write about my media consumption from last month. I’ve enjoyed doing this kind of reflection at the end of every year, so why not do it every now and then? Here are my thoughts on some of the new things I watched, read, and listened to in June 2017. 

Books

Mastery by Robert Greene: An OK book of stories about geniuses in various fields and what we can learn from them. But Robert Greene’s thesis — a framework drawn from these stories for how anyone can become a genius — was not credible. His writing was high on flowery language and low on evidence, and the connections he drew were superficial. That’s not to say I didn’t enjoy it. Reading about Einstein and Da Vinci and a lot more people that I hadn’t heard of before was interesting. And it’s not like I had high expectations; Greene is not a historian or a psychologist or any kind of expert that I can tell; his only background is in publishing books premised on simplifying the pursuit of things that everyone wants but that are hard to get — his other titles include The Art of Seduction and The 48 Laws of Power. But it’s more entertaining than illuminating, and you can read it in a couple of days.

Movies

The Big Sick: I was excited to see Kumail Nanjiani, and he was funny, though still not as funny as he is playing Dinesh in Silicon Valley. And after more than ten years, I’m getting a little tired of Judd Apatow-style romantic comedies. There wasn’t anything wrong with this movie — I just felt like I’d seen all its jokes and romantic beats before. 

Wonder Woman: I’ve been telling everyone I’m out on superhero movies for a while now but I made an exception because the internet was really buzzing about this. And I really enjoyed it. Chris Pine was hilarious and I’ll admit to getting really hype in the part where Wonder Woman goes over the top of the trenches in that World War I battle scene. 

Kobe Bryant’s Muse: I knew that this was Kobe’s self-produced documentary for Showtime and would only be his side of the story of his career. But it was extremely well-made, self aware, and inspiring look inside the mind of a truly insane competitor. I definitely came away with a new level of respect for Kobe after watching this and if you care about the NBA, I highly recommend it. 

Podcasts

Binge Mode: Binge Mode, where The Ringer’s two biggest A Song of Ice and Fire nerds recorded 40 minutes of discussion for each of the show’s 60 episodes of Game of Thrones, is podcast heaven. I’ve re-watched all of GoT, in order, so many times that I get no pleasure from going back through old episodes anymore. But hearing Jason and Mallory laugh and debate and do really hilarious character impersonations allowed me to experience the whole series yet again from a new perspective. I’ll always be thankful to Binge Mode for allowing me to wring yet more water out of the ASOIAF towel, and for giving me something to do to feed the beast other than watching janky fan theory videos on YouTube.

Conversations with Tyler – Ben Sasse interviewed by Tyler Cowen: A non-political interview covering big-picture topics, so a perfect fit for me at this point in my life. Highlights: the origins of the Reagan Revolution, and how the rightward shift of the national dialogue that took place in the 1970s might have been about religious paranoia as much as a backlash against the Civil Rights Movement; and the crisis of national loneliness, due to the fact that the average American has half as many close friends as she did 25 years ago. Most exciting for me was the way Sasse talked about the transition to an information-based economy as the root cause of our economic, cultural, and political problems, and how neither party is thinking enough about the long-term future, which is like, something I’ve been trying to figure out how to say for a long time. It was heartening to hear that from someone else, even if he plays for the wrong team.

What the Decline of Travel Agents Could Mean for the Rest of Us

When I was eight, my parents took us to Greece, and used a travel agent to book all flights, hotels, and in-country transport. Three years later, when we went to Italy, my brother and father used the internet to plan the trip themselves. We rarely used a travel agent again.

So when I think about jobs that computers are making obsolete, travel agents are among the first that come to mind. And the Occupational Employment Survey proves my intuition correct: at the beginning of the dot com bubble, there were 111,130 travel agents in the US. In 2016, there were only 68,680, a decline of 38%. Yet according to Osborne & Frey’s study of automation (the one I tried to poke a bunch of holes in last week), travel agents have a less than ten percent chance of being computerized. How could this be, given how perfectly the collapse in travel agents since 1999 coincides with the rise of online travel booking services?

Source: Bureau of Labor Statistics and Statista.com 


The discrepancy between the employment numbers and Osborne & Frey’s analysis shows me that I’ve been thinking about the possibility of automation the wrong way: machines’ threat to human labor isn’t one of complete substitution. Instead, it’s one where even in industries where humans perform activities that machines can’t replicate, use of technology for information processing makes those industries employ fewer people.

In the O*NET survey, travel agents receive fairly high marks for originality, social perceptiveness, negotiation, and persuasion. And these are indeed the specialties of travel agents who are still working. If you’re willing to pay, you can have a professional design your dream vacation. The ability to find you the bed and breakfast with the perfect atmosphere without you spelling out every detail, or use a personal connection to get you a guided museum tour outside of opening hours, cannot be replaced by machine intelligence, and won’t be in the immediate future. But the thousands of travel agents whose primary functions were storing, processing, and communicating publicly available information have been replaced by technology. This intra-occupational dynamic introduces two questions with huge implications for the economy:

1) Does specializing in originality, social perceptiveness, negotiation, and persuasion scale?

In other words, if all the travel agents who were replaced by Expedia invested time in discovering unique and off-the-beaten-path travel destinations and experiences and offered greater personal service to their clients, could they have preserved their jobs?

If the answer is yes, harm from automation will be easier for society to overcome. Culturally, we expect to be able to earn a living doing things that are not cognitively challenging: call this person, go find this information, read this memo, let me know when this thing happens, etc. Just don’t be an idiot, learn the rules, and keep your job. If we can shift that expectation, so that people understand that work in the 21st century means going the extra mile when it comes to networking and being creative, and change what we teach in school to fit that reality, maybe travel agents can stage a comeback.

It may be that specialization doesn’t scale. After all, the economy has only so many people who have the resources to pay for personalized travel agent service. Perhaps unemployed travel agents trying to get back into the business could only do so by displacing others. If this is the case...

2) Will job growth in other industries balance out the losses?

Perhaps for every occupation like travel agents, there will be one like bank tellers. Based on the prevalence of ATMs, it makes sense to assume there would be many fewer bank tellers today than in 1999. There are actually over 43,000 more. Increased economic growth and the lower labor cost of opening a new branch incentivized commercial banks to expand, leading to more tellers overall, despite fewer per branch. ATMs also allowed bank employees to spend more time on socially-intelligent, value-creating activities like engaging with small businesses. Bank tellers are the model case study for automation optimists.

But the bank teller example doesn’t put me at ease. Even if we don’t experience high unemployment, we still see greater returns to capital at the expense of labor, as we have seen for the last forty years. The travel industry isn’t the only one whose need for humans has plummeted as algorithm-driven platforms have boosted efficiency and captured all of the gains. I highly suspect that this trend is a big contributor to stagnant wages and historically-low labor force participation.

And if job growth in other industries can’t balance out the loss of travel agents and others like them, we will have a real problem on our hands. That’s where we’ll need to start considering much more radical forms of redistribution in order to preserve any semblance of social equality.

I’m still undecided as to whether automation will lead to sky-high unemployment, but the experience of travel agents does not bode well. We absolutely need to start emphasizing creativity and social intelligence in education, training, and job performance. And we should also start thinking of a Plan B in case that’s not enough.

47% of Jobs at High Risk of Automation? 3 Reasons to Not Freak Out About That Study

Are robots actually going to take all of our jobs? We don’t know. But a lot of smart people are convinced it’s going to happen.

For evidence, many of them point to the 2013 study by Michael Osborne and Carl Benedikt Frey called The Future of Employment, which concluded that 47% of US jobs are at high risk of automation in the near future. This statistic has been used in many alarming, credible-looking headlines, like this one:

 

Or this one:

These headlines are misleading. The Future of Employment is a well thought out analysis of which jobs might be susceptible to automation from a technical standpoint. But that’s not the same thing as saying that any particular job will be automated. And this study isn’t meant as a prediction of the future. Let’s unpack their methodology and examine a few features of it that illustrate why.

The methodology: The entire study is based on a federally-maintained database of US occupations called the O*NET survey. Taken sporadically every few years, this survey asks workers in every occupation to rate how good they have to be at over a hundred different skills, abilities, work activities, and areas of knowledge in order to be successful at their jobs. These numeric “level” scores form the basis of the authors’ comparison of one job’s computerization potential to another.

The O*NET survey also asks workers to report the relative importance to job performance of the same skills, abilities, activities, and knowledge areas for which they reported the level required to do their work, but the authors left this “importance” metric out of their model.

The authors picked 70 jobs that were fairly representative of the US workforce. Then, in consultation with machine learning experts, for each occupation they asked a question: “Can the tasks of this job be sufficiently specified, conditional on the availability of big data, to be performed by state of the art computer-controlled equipment?” Occupations for which the answer was “yes” were marked with a 1; no’s were given a 0. They then modeled a probability of computerization for 632 other jobs based on correlations between the O*NET scores and the hand-labelled computerizability marks of the 70 representative jobs they chose.
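To make that pipeline concrete, here’s a heavily simplified sketch with invented occupations and scores. Frey and Osborne actually used a Gaussian process classifier over nine O*NET variables; I’m substituting a tiny hand-rolled logistic regression so the sketch runs with no dependencies:

```python
# A simplified sketch of the Frey-Osborne setup with made-up numbers:
# hand-label a few "training" occupations (1 = computerizable, 0 = not),
# fit a classifier on their O*NET-style skill levels, then predict a
# probability of computerization for unlabelled occupations.
# All occupation descriptions and scores below are invented.

import math

# Features: (originality, social perceptiveness, manual routine-ness), 0-1 scale.
train = [
    ((0.10, 0.15, 0.60), 1),   # telemarketer-like: routine, labelled yes
    ((0.20, 0.25, 0.70), 1),   # assembler-like
    ((0.85, 0.80, 0.40), 0),   # choreographer-like: labelled no
    ((0.75, 0.90, 0.30), 0),   # therapist-like
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights by plain stochastic gradient descent on the logistic loss.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(20000):
    for x, y in train:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def prob_computerizable(x) -> float:
    """Model's probability that an occupation with scores x is computerizable."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Predict for two unlabelled occupations.
print(round(prob_computerizable((0.15, 0.20, 0.65)), 2))  # routine-looking
print(round(prob_computerizable((0.80, 0.85, 0.35)), 2))  # creative-looking
```

The point of the sketch is the shape of the method, not the numbers: a handful of expert yes/no judgments gets extrapolated, via O*NET-style scores, into probabilities for hundreds of occupations that no expert ever looked at directly.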

If you’re worried that the study’s 47% conclusion means we’ll have 47% unemployment, here are a few features of this methodology that should put you at ease:

The model used availability of “state of the art” technology as its only standard. Occupations were marked as computerizable if they could be replaced using the most cutting-edge technology available – regardless of how expensive, difficult to use, or socially unacceptable that technology might be. To use this study to predict that 47% of jobs will disappear by 2034, you have to assume that McDonald’s will fire all its workers even if the robots that replace them cost a million dollars each and make sexually suggestive remarks to customers. That won’t happen; McDonald’s will only fire its employees if the tech to replace them can offer a pleasant experience to customers for a low investment. Osborne and Frey didn’t attempt to analyze factors other than technical feasibility. This narrows the scope of their analysis. 

The model left out O*NET’s “importance” scores. If you’re thinking about whether a job can be automated, you have to consider the relative importance of tasks. For example, a barista at a fancy cafe might pour cool designs into the foam of my latte – a skill that machines would have a hard time replicating. But in evaluating whether her job can be automated, I’d give less weight to her pouring acumen and more weight to her ability to quickly produce my coffee. Osborne and Frey had to have gone through this kind of thought process for the 70 occupations they hand-labelled to train their probability model. So I can’t understand why they left out the metrics that would have extended this thought process into their objective analysis. Leaving them out undermines their model’s predictive value.

The paper gives us very little information about Osborne and Frey’s process of hand-labelling the 70 occupations that formed the foundation of their probability model. What machine learning experts did they consult about which jobs? How did they consider whether a job was computerizable? What activities associated with work did they consider to be essential job tasks? Each job is complicated enough to have its own study analyzing its potential for automation. We don’t know whether the authors spent six months hand-labelling their training data, or whether they did it in an afternoon. Knowing more about that process would be a big help to understanding how seriously to take their conclusions.

A quick glance at Osborne and Frey’s list of occupations shows how the above issues may have affected their conclusions. For example: bank tellers, to which the model assigned a 98% probability of computerization. Since ATMs have existed for 30 years, that probability makes sense. But more bank tellers are employed today than in the 1980s. Even if cheap alternatives to a job exist, it doesn’t necessarily mean that job will disappear. The same goes for umpires and sports referees, which the model also gave a probability of 98%. Nobody seriously talks about getting rid of them, despite the fact that every baseball broadcast checks an umpire’s called strike three against a computer simulation.

Don’t get me wrong: automation does pose a potential threat. The Future of Employment is a fascinating landscape of the US economy, and is extremely helpful towards understanding that threat by answering the question, “what percentage of US jobs have a skill performance level requirement that current machine learning experts say can be automated in the next 20 years?” Reading the paper, you really get a sense of what skills, abilities, and areas of knowledge we should cultivate in our workforce to ensure widespread prosperity and social equality. But don’t freak out: the paper doesn’t mean a future of high unemployment is certain. Such a prediction requires a much wider and deeper analysis.

Transitioning To a Non-Routine Economy: It's Not Happening, But It Has To

Ask an engineer how AI will impact the economy, and you’ll get a scary answer. The robots are going to take all our jobs. Technology will develop too fast for human skills to catch up. In a couple of decades, unemployment will be at least 20%.

Ask an economist, and you’ll get a more measured response. Technology has replaced jobs before, they say. But the economy always creates even more new jobs to fill the void. If technology has any ill effect, it’s in the form of what economists call skills-biased technical change: where most wealth accrues to the people whose skills are complemented by technology, leaving behind everyone for whose skills technology is a substitute.

Look closely at both of those answers and you’ll see they have the same big implication for society: to live a comfortable life in the future, you’ll have to perform non-routine work that machines can’t do. Social equality will depend on all humans using the creative, social, strategic, or keenly perceptive and manipulative parts of their brains to do something of economic value. On this, even deep learning engineers with Singularity countdown clocks can agree with white-haired economists who don’t use email.

Civilization has never conditioned most humans to view work as a creative or strategic endeavor. Ever since the best strategy for survival became to grow crops, people have done more or less the same thing over and over again every day. If they were lucky enough to avoid misfortune at the hands of Mother Nature or the God of War, they could derive happiness from family, children, and the possibility of an afterlife. Industrialization only reinforced the repetitive nature of work for most people. Transitioning to an economy where every single worker does non-routine work, using the most human parts of their brains, would be unprecedented.

As an optimist (or maybe a sucker), I figured I would try to see if this transition wasn’t already happening. So I assembled a dataset that would let me view the share of the workforce comprised of occupations whose required tasks, skills, knowledge, or abilities are highly creative, social, or keenly perceptive/manipulative for each year going back to 1999. The Bureau of Labor Statistics website isn’t awesome, and I wish standard formats for federal data releases were a thing, but after 12 hours of repeated INDEX/MATCHes, I finally had the numbers I was looking for.

(I did this by building on the infamous 2013 study by Oxford scholars Osborne and (not Walder) Frey that concluded 47% of US jobs were at high risk of automation in the next 20 years. I won’t bore you with the details of my methodology unless you are curious, in which case, yes, I would love for you to read about how I spent my weekend).
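For the curious, the spreadsheet grind amounts to a join on occupation code followed by a threshold-and-sum; in Python it looks something like this (all codes, scores, and thresholds below are invented for illustration):

```python
# What hours of INDEX/MATCH boil down to: joining BLS employment counts
# to O*NET-style scores on occupation code, then computing the share of
# employment in occupations that clear a "non-routine" threshold.
# All codes, names, and numbers are made up for this sketch.

employment = {            # occupation code -> employment count
    "41-3041": 111_130,   # travel agents
    "27-2032": 20_000,    # choreographers
    "43-9021": 300_000,   # data entry keyers
}

originality_score = {     # occupation code -> O*NET originality level (0-100)
    "41-3041": 45.0,
    "27-2032": 88.0,
    "43-9021": 12.0,
}

NON_ROUTINE_THRESHOLD = 60.0

total = sum(employment.values())
non_routine = sum(
    count
    for code, count in employment.items()
    if originality_score.get(code, 0.0) >= NON_ROUTINE_THRESHOLD
)
share = non_routine / total
print(f"non-routine share: {share:.1%}")
```

Run once per year of BLS data and you get the time series I describe below; the tedious part is that the real occupation codes and file layouts change from release to release, which is why the spreadsheet version took a weekend.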

And, sadly, the needed transition to a non-routine economy is not underway. The share of jobs scored highly for the level of creativity, social intelligence, or powers of perception/manipulation has held steady at around 24% since 1999. And because I’m obsessed with the concept of creativity as a panacea for all of society’s political, spiritual, and economic problems, I looked at a subset made up of the 90 jobs that scored highly for originality and fine arts knowledge. Alas, these jobs have held steady at 6.2% of the workforce since 1999.

Based on these trends, the outlook for social equality in the US is not awesome. What kind of changes do we need for the transition to a human-centric economy to take place?

First, the trends I looked for in the data and failed to find could materialize: the non-routine jobs I was looking at could start to become a larger part of the economy. Let’s say that advances in natural language processing put all paralegals out of work. That would free up capital for more massage therapists and yoga instructors. Enough new massage therapists and yoga instructors might enter the market, leading to lower wages, and suddenly everyone in the economy can afford their services. So we maintain full employment, and a population that is more flexible, mindful, and well-massaged than ever before.

I don’t think that specific scenario is likely. But stuff like that could happen at the margins. Also, I wish to God the BLS had had occupation-level employment data from before 1999; it would be great to see if such a shift away from routine work – perhaps resulting from some past wave of automation – has ever taken place before.

Anyway, the yoga/massage utopia scenario doesn’t even consider the possible creation of entirely new categories of work – the economist’s trump card. It’s a historically and logically sound argument to say that automation technology can unleash massive latent demand for new services.

Or there is another scenario: occupations could become more creative, social, or perceptive/manipulative, as people performing them use the extra time they gain from automating routine tasks to find new creative, social, or perceptive/manipulative ways of increasing their output.

This could already be happening. Businesses and other organizations have been using software to automate routine tasks and cut costs for decades. Unless employees of those businesses have been using time saved from automation for office naps, they will have already been augmenting the creative, strategic, or social aspects of their job performance. Anecdotally speaking, this has been happening in the legal industry, where firms' hourly rates have skyrocketed over the last twenty years, as the number of lawyers required to work on projects has decreased due to legal software diffusion.

And if this trend were happening across the economy, my analysis wouldn’t have measured it. Though I looked at employment levels for each occupation going back to 1999, the scores I used to measure the creative, social, or perceptive/manipulative skills required for job performance were based on federal surveys from 2013. The BLS could have updated these job characteristic scores to reclassify previously routine occupations as more human.

Such a trend WOULD show up in a rising productivity rate, which measures output per hour of work. Alas, productivity has been flat since 2004 – one of the most troubling mysteries in the economy today.

Many people I’ve talked to presume that the economy can sustain only so many creative, social, or perceptive/manipulative occupations, and that the bulk of the population must necessarily be made up of people doing routine work, or no work at all. I disagree. I think this is a transition that can take place — it’s just a big shift in how people think.

Scientifically speaking, every human being has a strategic brain, the ability to socialize, and the ability to originate ideas. As routine tasks are automated this century, social stability will depend on us making each person’s latent ingenuity economically valuable. Too bad this isn’t already happening that I can see.

 

How Reading About Automation Made Me Sympathize With Anti-Intellectualism

Last week, I was listening to Larry Wilmore interview Neil deGrasse Tyson. Building on a condemnation of people who think comets are a sign of the apocalypse, America’s poppiest astrophysicist expanded the scope of his criticism. “Huge swaths of the public are embarrassingly scientifically illiterate,” he said, with venom in his voice.

This is a common lament among us liberals, when we’re down on our luck, searching for reasons why our leaders are as stupid as they evidently are. Like some have been said to cling to guns and religion, many urbanites cling to the idea that most Americans are dumb, and that distrust of scientists and distaste for intellectualism are the chief expressions of said dumbness. Around once a month, I see an article in my news feed from Slate or some other organ of liberal anxiety bemoaning anti-intellectualism. The writers, privileged though they are to work in a creative profession, brim with incredulity that some people don’t implicitly trust scientists and others with advanced degrees.

I sympathize with the urge to disdain. I grew up in Northwest DC surrounded by journalists, academics, and other people who exercised their intellect for a living. I was shocked in 2000 when Americans elected the candidate who had a limited vocabulary and constructed sentences that made no grammatical sense. The fact that there has ever been a “debate” about whether climate change is man-made has always been infuriating. Why don’t Americans seem to value intellect in the same way everyone I grew up with did? 

But anti-intellectualism is part of our society’s DNA, built into the kind of activities that our economy has always demanded most workers perform. I started thinking about this while doing research on AI and trying to figure out whether robots are going to take all of our jobs. And I’ve developed a little sympathy for people with anti-intellectual feelings.

The most infamous study of whether robots will take all our jobs concluded that 47% of US occupations were at “high risk” of automation in the next twenty years. I don’t think that figure is realistic, but that’s a debate for another time. What’s most interesting to me about the study was the way the researchers (Frey & Osborne) classified different work activities: occupations made up of routine, repetitive activities were deemed at high risk of automation, while occupations comprising activities of creation, perception, and persuasion were deemed very unlikely to be automated anytime soon.

I’ll certainly be worried if 47% of jobs really do disappear in the near future. But I’m also worried about the certainty that 47% of people have had no choice for earning a living but to perform repetitive tasks all day, every day. People in creative, perceptive, and persuasive jobs get to solve original problems and bring things into the world out of their minds that didn’t exist before. People whose jobs are made up of nothing more than repetition use those parts of their brains much less frequently.

Our economy has always needed a small number of people to use the intellectual parts of their brains, and a much larger group of people to perform the same tasks over and over; there’s only so much room at the top. Our schools and other tools of socialization haven’t conditioned most people to expect anything else. Neil deGrasse Tyson said it himself, just a few minutes before his invective against the scientifically illiterate: “You spend the first two to three years of your childhood learning to stand up and walk around. Then you spend the rest of your childhood being taught to sit down and shut up.”

If you grew up with that experience and ended up in a repetitive job, it would make sense if you came to resent the privileged few who enjoy daily cognitive challenge and satisfaction. And if that privileged few called you stupid, might it make you feel like voting for the most evidently stupid person to run for office in our history out of spite?

Rather than bemoan the lack of respect for intellectualism on the part of some citizens, we’d be better off changing the underlying conditions that make them that way. I don’t know if an all-creative/perceptive/persuasive economy is possible; it certainly hasn’t ever been before. But it’s definitely more possible today than it ever has been in the past. To the extent that we can get more people into jobs where they exercise their creativity, we’ll forestall any man vs. machine employment crisis. As a bonus, maybe we’ll reduce anti-intellectualism, and keep anti-intellectuals out of the White House.

Stuff Government Does: December 2015

 

Happy holidays, fellow optimists! I hope that all of you have enjoyed eating, drinking, arguing with family, and opening presents this week. Despite the shortened work month, our governments did not sit out the ritual of giving, and have left us with gifts in the form of progressive achievements. Let’s rip through the carefully folded wrapping paper and see what’s inside.

Military opens combat roles to women: When I was a little kid, adults told me that women could do any job that men could do. So when I learned that only men could serve in military combat roles, I wondered what the deal was. Nobody had an explanation that made sense to me.

Fortunately, on December 3rd Defense Secretary Ashton Carter announced that each of the armed services would have 30 days to submit plans for opening up to women the 10% of combat roles that remain male-only. This includes the Navy SEALs and the Marine Corps combat infantry, whose commanders had recommended continuing to keep women out of certain jobs, like that of machine gunner.

In the last couple of years, gender roles and the sexism therein have received their first public examination in decades. It’s great to see that translate to changes in government policy that promote gender equality.

NHTSA updates their crash test rating system: The National Highway Traffic Safety Administration is the federal entity responsible for making sure people don’t die in car accidents. As a part of this noble mission, they assign new cars a safety rating of 1 through 5 depending on how the car performs in a crash test. The 1-5 score is a single score that encompasses every aspect of crash safety.

On December 8th, the NHTSA proposed new rules (which will now go through public comment) that will impose a more comprehensive scoring system for crash tests, with individual scores for crash avoidance systems and pedestrian safety — e.g., measuring how badly different cars will hurt pedestrians they run into.

This is a pretty obscure change, but it’s always good when public safety institutions update their procedures to reflect changes in technology and a greater range of potential threats. Also, these rules will serve as a more effective framework for certifying the safety of self-driving cars, which technology publications keep telling me are just a few years away.

Virginia suspends concealed-carry agreements with 25 states: In Virginia, if you have a history of stalking, drug dealing, or mental health inpatient treatment, you are not allowed to carry a concealed weapon with you when you leave your home. But these rules haven’t really been enforced, because Virginia has agreements with 25 states to honor their more relaxed concealed-carry laws. That meant that if you were a stalker in Florida, and you were permitted to carry a gun around in your jacket, you could carry that gun up to Virginia and not be in violation of any laws, even though if you were a Virginia resident, you would not have been able to get a concealed-carry permit in the first place.

But on December 21st the Attorney General of Virginia, Mark Herring, announced that he was ending Virginia’s concealed-carry permit reciprocity agreements. Once the change takes effect, more people will leave their guns at home. This is a tiny adjustment relative to what’s needed to significantly reduce gun violence in this country, but a positive step nonetheless.

Happy Mix of the Month

[soundcloud url="https://api.soundcloud.com/tracks/239079557" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="450" iframe="true" /]

Ending the EITC marriage penalty: The Earned Income Tax Credit is a wage subsidy for working adults with children that is credited among policy wonks as the federal government’s most effective tool at fighting poverty and encouraging employment. For technical reasons, for decades following its enactment under the Gerald Ford administration, EITC benefits were reduced for couples with children if the parents got married. This “marriage penalty” created terrible incentives.

Fortunately, the glitch was fixed on a temporary basis in the stimulus package in 2009, and was fixed permanently by the recent omnibus spending bill, a deficit-growing monstrosity that nonetheless included a couple of policy gems like this.

Compensation for Iran hostages: The 52 American diplomats who were taken hostage in Iran in 1979 were subjected to multiple mock executions and other abuse over their 444 days in captivity. The crisis also heralded the arrival of a brutal new geopolitical era, in which diplomats, previously considered off-limits, were no longer safe.

Under the arrangement made in 1981 to secure their release, the hostages were prevented from taking any legal action or seeking compensation from the state of Iran. But as part of the omnibus spending bill, the US government is finally compensating the victims, to the tune of $4.4 million each. The cash will come from a $9 billion penalty paid by a French bank that violated financial sanctions against Iran a couple of years ago.

This isn’t a huge progressive step, but it helps right a historical wrong, and it’s a sign that at least in some cases, the federal government looks after its own.

VAN-gate: A Former Organizer's Explanation

 

The data breach dispute consuming the Democratic Party today is pretty confusing. According to the Washington Post’s initial reporting on this late Thursday night, the Sanders campaign admitted to “improperly accessing confidential voter information gathered by the rival campaign of Hillary Clinton” in VAN, the software most Democratic campaigns use to keep track of voters. VAN is the central information organ of a campaign, its version of a CRM or EHR system. A campaign without VAN is a fleet without radar.

The DNC responded to the breach by cutting off the Sanders campaign’s access to VAN, a catastrophic blow. By Friday afternoon, Sanders had sued the DNC for breach of contract and played down its own culpability in the situation, placing primary blame on NGP, the vendor that maintains VAN.

Did Sanders staffers actually steal data from Hillary’s campaign? Or did some of Hillary’s data just pop up on their computer screens without them realizing what it was? Amid the fighting words and news stories written by people who have no idea what campaign staffers actually do all day, it’s hard to tell where on that spectrum their actions lie. But I was once a campaign staffer who used VAN all day, and I’m pretty sure I know exactly what happened.

After this story broke, NGP, the maker of VAN, issued a statement saying that "for voters that a [Sanders staffer] already had access to, that user was able to search by and view (but not export or save or act on) some attributes that came from another campaign.” This statement points to a distinction between two databases within VAN: “My Voters,” and “My Campaign.”

My Voters is the general voter file that both campaigns have access to. But once you identify voters you want to keep track of, because they are targets for turnout efforts, likely volunteers, or something else, you import their records into My Campaign, a database that only your campaign has access to, to keep track of your contact history with those voters, and to save sets of search attributes or static lists for strategic purposes — for example, if you wanted to save a dynamic search of people whose contact history has attributes that make them particularly weak supporters.

When you’re logged into VAN, you can access both databases, but there is a firewall between them, so you can only be working out of one or the other at any given time. If you’re in My Voters, the interface is colored in shades of blue; in My Campaign, the interface is tan colored. Data does not copy back and forth when you update the same voter's profile in one or the other.
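The separation described above can be sketched as a toy access-control model. To be clear, every class and field name here is hypothetical — this is not NGP VAN's actual schema or API, just an illustration of how shared demographic data can coexist with campaign-private attributes behind a firewall check:

```python
# Toy model of VAN's two-database separation (hypothetical names;
# not NGP VAN's actual schema or API).

class VoterFile:
    """'My Voters': shared demographic data every campaign can read."""
    def __init__(self):
        self.records = {}  # voter_id -> demographic attributes

class CampaignDB:
    """'My Campaign': one campaign's private contact history and lists."""
    def __init__(self, owner):
        self.owner = owner
        self.records = {}  # voter_id -> private attributes

class VANSession:
    """A logged-in user: reads the shared voter file freely,
    but only their own campaign's private database."""
    def __init__(self, campaign, voter_file, campaign_dbs):
        self.campaign = campaign
        self.voter_file = voter_file
        self.campaign_dbs = campaign_dbs

    def lookup_demographics(self, voter_id):
        # Shared data: no firewall check needed.
        return self.voter_file.records.get(voter_id)

    def lookup_private(self, voter_id, campaign):
        # The firewall: cross-campaign reads are refused.
        if campaign != self.campaign:
            raise PermissionError("cannot read another campaign's data")
        return self.campaign_dbs[campaign].records.get(voter_id)
```

In these terms, the alleged glitch amounted to the cross-campaign check temporarily failing for searches, so one campaign's queries could surface attributes belonging to another.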

Let’s imagine there is a registered Democrat in New Hampshire named George W. Boehner. George is 55 years old and lives at 123 Brown Street. This year, George has been called by two Hillary volunteers, and had one come to his door. In the first two calls, he said he really liked Hillary but that he wasn’t paying much attention to the election yet, and the callers noted this information in VAN. When a canvasser came to his door a few months later, he said that he was now leaning towards Hillary, and would probably vote for her; however, he also said that he was now paying close attention to the election, and was intrigued by what Sanders was saying about income inequality.

George's demographic information lives in My Voters. Both Hillary and Bernie’s campaigns can see that he is a registered Democrat who voted in 2012, is 55, and lives at 123 Brown Street. But the information about his voting preferences and the issues he cares about lives in My Campaign. Only Hillary should be able to see that information, because if the Sanders campaign sees it, they would know that he is a soft Hillary supporter and potentially could be persuaded to vote for Bernie.

NGP’s statement above is true — George W. Boehner is a voter profile that both campaigns already had access to. However, the “some attributes that came from another campaign” likely constitute the critical information contained in My Campaign. So I think Sanders staffers probably found themselves with access to some of the Hillary campaign’s saved searches in My Campaign. These would have been so obviously labelled that Sanders’ team should have immediately known not to click on them, let alone save them as static lists in their own personal folders, as they apparently did. So while I don’t think Team Sanders committed outright theft, they are clearly at fault.

The DNC acted out of turn by immediately revoking Sanders’ access to the voter file — according to their contract, the Sanders campaign should have had 10 days to correct any breach before having its access revoked. Debbie Wasserman Schultz wanted to give Sanders a public humiliation, and ended up embarrassing herself.

But the statements the Sanders team has given trying to place the blame entirely on NGP — as if the data being made available to them means that they had a right to look at it — are ridiculous. If you find a duffle bag of money on your front porch that you know isn’t yours and you turn the money in, you haven’t done anything wrong. But if you take the money out of the bag and start counting it on your dinner table and thinking about ways that you might spend it, you’re both a complete idiot and ethically culpable. And the campaign’s effort to turn its own crime into a “the Establishment is trying to tear down the little guy” narrative is as cynical as they come. Appropriately so, for a campaign that is trying to win. But for a candidate who portrays himself as the fearless truth-teller, it tastes sour.

I hope that we’ll learn in more detail over the coming days about who clicked where, who told them to do so, what data they saw, and whether they still have any of it. And I hope that those facts determine the outcome of this street fight.

What John Oliver Doesn't Get About China

 

John Oliver’s segment on public financing of sports stadiums was one of his best this year. Over twenty minutes, he skewered the profitable franchises whose owners threaten to move to another city unless taxpayers build them new stadiums with amenities like swimming pools and fish tanks. But one of his laugh lines about halfway through revealed a touch of Western-centric cultural ignorance that he usually rises above. Watch this part about the Milwaukee Bucks:

[youtube https://www.youtube.com/watch?v=xcwJt4bcnXs?start=787]

Tom Barrett, the Mayor in the clip, is surprised and delighted that a random Chinese person would recognize a Milwaukee Bucks logo, taking it as a sign of Milwaukee’s symbolic relevance to world culture. Oliver is surprised to the point of total disbelief. I can tell you from experience that they are both wrong: China, like the US, is full of basketball fans who know a lot about every pro team.

In Shanghai, images of Dwight Howard and Kobe Bryant cover the sides of skyscrapers. Groups of young would-be ballers wait in long lines for time on packed public courts. When you’re a foreigner in China, taxi drivers will often ask you where you’re from and what you’re doing there. Multiple times when living in Shanghai, after I said I was from Washington, drivers would excitedly ask me if I’d seen Gilbert Arenas play.

There may even be more NBA fans in China than in the US. Of the roughly 250 million adults living in the United States today, 6% claim the NBA as their favorite sport, according to the Harris Poll’s annual survey. That gives us 15 million NBA die-hards. Now let’s triple that number to allow for people like me who have a different favorite sport but still follow the association closely and say that there are 45 million above-casual pro basketball fans in the US. As another point of reference, the 2015 NBA Finals averaged 19.94 million viewers per game, the most since Michael Jordan’s last season.

Now consider the following: an estimated 300 million people play basketball recreationally in China; NBA games are routinely broadcast on CCTV5; the NBA’s 2012 Chinese New Year Celebration, taking place over 8 nights in January, averaged 10 million viewers per game. Extrapolating from China's four-times-larger population, it's not farfetched to hypothesize that if there aren't as many basketball fans in China as there are in the US, it's close.
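For what it's worth, the back-of-the-envelope arithmetic above can be laid out explicitly. The 6% share comes from the Harris Poll; the 3x multiplier is the rough assumption stated in the text, so treat the result as an order-of-magnitude guess rather than a measurement:

```python
# Rough estimate of US above-casual NBA fans, using the text's assumptions.
us_adults = 250_000_000
favorite_sport_share = 0.06        # Harris Poll: NBA is favorite sport
die_hards = int(us_adults * favorite_sport_share)

above_casual_multiplier = 3        # assumed: serious fans beyond die-hards
us_fans = die_hards * above_casual_multiplier

print(f"US die-hard NBA fans:     {die_hards:,}")   # 15,000,000
print(f"US above-casual NBA fans: {us_fans:,}")     # 45,000,000
```

Against an estimated 300 million recreational players in a country with four times the US population, 45 million does not look like a hard bar for China to clear.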

The guy who came up to Mayor Barrett could very well have watched a Bucks game within the previous week. But in the Western popular conception, China is exotic and mysterious. So the Mayor and the late night host are varying degrees of dumbfounded. Like many Americans, they subscribe to a storybook impression of the world’s largest country. 

Our culture perpetuates the stereotypes. Movies set in China are almost always about Kung Fu, gangsters, or the emperor. Well-intentioned journalists overwhelmingly write about things that cast Chinese people as the “other”: oppression of dissidents, environmental degradation, territorial aggression. It’s not that any of these things are factually inaccurate; kung fu is a thing in China, the country did once have an emperor, and they do oppress dissidents. But we only seek to produce and absorb content that reinforces our worldview.

It’s a form of American exceptionalism we’re all a little guilty of: by pointing out how strange other places are, we feel more settled about ourselves. I can’t even tell you how many times people have asked me, usually with a grin, about whether I ever ate dogs, or saw people eat dog meat, in China (in three years, I never once did). If Mayor Barrett had realized that the NBA is a global brand that people all over the world follow closely, and that a Chinese person having heard of his city's team is not noteworthy, then that incident wouldn’t have made him feel as proud of Milwaukee as it clearly did. If we recognize that the movies we watch, the food we eat, the music we listen to, and the sports we follow are just as normal to people we think of as exotic as they are to us, then we feel less reassuringly sophisticated. The complexity of globalization makes thinking about the world more difficult. But we’d be better global citizens if we made the effort, instead of chuckling to ourselves about foreigners and their ways.

John Oliver is usually so good at deconstructing his audience’s cultural ignorance; this time, he fell prey to it. 

Stuff Government Does: November 2015

Greetings, friends. I think you’ll agree that November felt like a particularly messed up month of news. It’s been many years since an event poured as much sadness and fear upon us as did ISIS' assault on the 11th arrondissement. But if it feels to you like the world is getting more unstable and more unsafe, you should be reassured to know that rates of almost every category of violence have been collapsing over the centuries, and even in the last thirty years. Consider these data points: before the rise of the nation-state, battles killed 500 of every 100,000 people. In the 19th century, it was 70; in the 20th century, it was 60; and as of 2011, it was down to three-tenths of a person per 100,000. Your life and limb are irrefutably safer from harm than those of your ancestors.

Hold that fact in your mind while we jump into November’s edition of Stuff Government Does: your monthly digest of the reasons to believe that government can do things well.

Department of Justice crackdown on fraudulent nutritional supplements: Have you ever been sprawled on your couch at 1 AM, eyes half open, and been suddenly shocked from your torpor by a commercial, its volume 10 clicks above the Seinfeld rerun you were watching, about a dietary supplement promising some impossible benefit for only $19.99?

You’re not the only one. And it’s not just about obnoxious advertising. Some of these supplements have dangerous side effects, and are linked to at least 2,000 hospitalizations each year, according to Vox. Fortunately, on November 17th the Department of Justice filed criminal charges against over 100 makers or marketers of bunk dietary supplements. Six executives were arrested as the result of a yearlong federal investigation. It’s good to see DoJ picking up the slack for the FDA’s under-regulation of the quack medicine racket.

High speed rail is coming to Vegas: High speed rail is one of the coolest things ever. To be able to zip down to LA in two and a half hours would be life-changing. Where built, it would redefine what we consider a “metropolitan area” and vastly expand the distance people could consider traveling when looking for a job, a school, or a home. With such increased mobility, it would exponentially raise the ceiling of America’s economic performance. I could even take day trips to LA.

If the history of high speed rail in the US were a sapling, it would be dried out and falling over. After the stimulus bill provided money for high-speed rail in four or five major population zones, Republican killjoys in all but one of them gave the money back. But the upside to high speed rail is so high that I get excited about even the tiniest of green shoots.

So I’m happy to report that last month, Nevada’s high speed rail commission selected a vendor to build the line on which people will travel from Southern California to Las Vegas in about eighty minutes. Construction is due to begin in 2016.

There’s a caveat: the initial phase will only run from Vegas to Victorville, California. If you’ve never heard of Victorville, that’s because it’s in the desert about an hour and twenty minutes’ drive northwest of downtown LA. I know, it’s a major bummer. But like I said, it’s progress — and once the line to Victorville is complete, the same vendor will begin work on an extension to connect with the planned California high speed rail line that is under construction right now.

Happy Mix of the Month: Each month I'll share a studio mix or live set I really like, because reading about progressive optimism and listening to electronic music go hand in hand, obviously. More things government did below!

[soundcloud url="https://api.soundcloud.com/tracks/234357406" params="auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true" width="100%" height="450" iframe="true" /]

 

Voting rights in Kentucky: I’ve come to realize that in American politics, the more people turn out to vote, the higher the quality of governance we get. So it’s a net positive that on November 24th, Kentucky’s outgoing governor, Steve Beshear, signed an executive order that will allow most convicted felons to vote once they complete their sentences. This means that almost 100,000 disenfranchised people will soon get the right to vote. It’s also a baby step towards improving our ability to reincorporate convicted felons into society.

Taco Bell will humanely source their eggs: Occasionally, a corporate action makes it into this blog series. To qualify, the corporation in question has to be large enough for its action to constitute a significant advance for the public in that issue area. Taco Bell makes the cut this month with their announcement that by the end of 2016, they will serve only cage-free eggs.

I never used to care where my food came from, and would get annoyed at people who tried to guilt trip me with documentaries about suffering chickens. But then I read Rolling Stone’s really well-researched, visually horrifying pig farm exposé two years ago, and I realized it wouldn’t kill me to look for animal-based food options that didn’t come from meat factories where animals undergo torture for their entire lives. It’s great to see one of the largest and most symbolically important mass food providers come around to the same conclusion.