Parasite feels American to me (sort of spoilers)

The rich people’s house in Parasite has a son, self-confidently lost in imagination, running, jumping, whooping in costume, shooting arrows throughout the house. His mother, whose wealth allows her subconscious to be consumed only with the world-drenching anxiety and what-if game that having children brings (so they tell me). Her voice ringing in faux agitation, réelle adulation, as she calls for him to settle down, clearly celebrating behavior she feels is all the healthy glow of genius, that she is of course responsible for bringing into the world. The rapid, fragile nods as she agrees to pay for four weekly sessions of art therapy. Looking at how she hoped to examine and manage her child’s little brain down to the last neuron made me feel like I was looking at America, at myself. 

It was the same with the low-volume, deadpan derision in the looks on the faces and the sounds of the voices of the poor brother and sister as they tore into their caper, the performance within the performance. The deliciousness they felt, and projected into the audience, as they engaged in cold manipulation and criminality. As the whole family lay on the wrong couch delighting in their raw, exquisite juicing of wealthy obliviousness, so did we. Quiet, knowing schadenfreude (sadism?) struck a chord with a very cynical, minor-key, roll-your-eyes realness and irony in response to injustice and inequality that I see all the time in Americans, three generations after the onset of the age of the suburbs and the Salk vaccine, who have nothing left to feel exuberant about. 

The other Korean film I saw in the last year, Burning, had similar themes: rich vs. poor. But not in ways that fit so satisfyingly with the objects and tone of American dissatisfaction with inequality; there was less a sense that they were looking at the same things and feeling the same way about them. The clean condo and fancy group dinners and Italian wine rang true; the over-the-top sports car and the serial-killer insinuations did not. An image of wealth and inequality that was powerfully symbolic but a little exaggerated, cartoonish. It made for terrific drama but didn’t connect as much in an earthy, we-touch-the-same-objects-and-read-the-same-Slate-articles sort of way as the big house, the sacred children’s birthday ritual, the aborted camping trip, and the understated drip of sarcastic resentment that they are asleep to. And while Burning is (obviously) not an offensively stereotypical portrayal of Korean culture, it does have plenty of the unfamiliar — the young women performing song and dance in miniskirts outside a grand retail opening; the main character bowing deferentially when he meets the older villain. 

Parasite reflects an emotional response to an unequal world that feels so warm and familiarly American it could have come out of a microwave under a thin film of plastic. I love it: taking Korea and positioning it not across the ocean, but across the street, or even across the living room; showing that not only are we both in the OECD, we also laugh ruefully and pessimistically at our historically plentiful 2019 societies in the same ways. That’s a deep connection indeed. 


So what was it like living in China?

China was a story about travel, cross-cultural friendships, alcohol, and the beginnings of an intellectual love affair with globalization. But in my memory, what holds it all together are the sensations. For example: the first thing I felt was stickiness everywhere. A numbing, early-summer moisture passed through my skin and my spirit as we walked out of the terminal after that awful first flight, while some lower-order function in my brain’s software guided my body and the real me spun in blackness like debris from a space explosion. It stayed with me for months, indoors and out, the warm moisture, oddly muffling the alarms blaring in my amygdala that all was not OK.

Because I really didn’t want to move to China. It’s hard to explain, given how much I now look back on the preceding years with such sobbing pity, but at the time: it didn’t matter that I was in the friend zone with every girl who knew my name, or that my erstwhile best friend had never stuck up for me once. It didn’t matter that one of my greatest points of personal pride was anticipatory: having my name painted in gold, with all the other graduates’, on the walls of the cafeteria of a school so literally holier-than-thou that it called this cafeteria a “refectory.”

What mattered was that at the time these things were incomprehensible treasures, compared to the dim, grueling procession that had been my school and peer life for most of what I could remember to that point. And that on the other side of a move to China lay an ocean of unreality. The total loss of familiarity and control, which I could only anticipate with total fear.

In the beginning and the end, the unreality of China was made real by sensations I could feel in my body, which taught me things about life I’ll never forget. After humidity, there were smells. There was a signature of urine in the aroma of the moisture that first night at the airport, and in China there were always odors. It would be trash overflowing from a receptacle, or steaming pork in white wrinkled dough, or gray eggs in grotesquely cracked shells, soaking insipidly in bubbling brown liquid by the cash register of every convenience store. Always a rank sweetness in the air which could be revolting or enticing or very bizarrely both.

Some of the most special memories are audiovisual scenes of people and buildings. I’d be standing on the terrace of a favorite Thai restaurant. Before me, glowing tables, then a line, across which was grass, and then trees, all dark but discernible. At the edges of my vision, rounded, twirling gray features of the old colonial building. Behind me, a high-ceilinged room with a square bar big enough to play pickup basketball in, serving brightly colored drinks that were cold to hold on a hot, moist night. All around, people’s laughter warming like fire, rising like smoke up above the treetops to the skyscrapers, a crop of four to six of them, modest by China standards, but downright gaudy for a Nashville or a Miami, the tops of a few bathed in light. Necessarily both wasteful and decorative.

Wasteful and decorative, rank or fragrant, were new sensations. In Washington, everything disruptive, good or bad, was managed, curated, looked after, handled. No sensations were permitted that could knock you over. Sometimes the curation is good, like regular trash pickup. Sometimes it’s disappointing, like never smelling new foods when you walk down a strange street. And sometimes it’s infuriating, like permitting leaf blowers to moo at the sky all afternoon, signaling that what’s important in life is something so clean and routine as green grass unblemished by material from any other plants, discounting all possibilities of beauty or disgust.

China was many things, and it was not this. The skyscrapers were orbs, beacons, signaling to me that life never needed to be dark or boring ever again. I had been chosen to play in a wider playground.

People ask me what it was like to move to China; these are just a few of the things.


An AI Safety Conference Is a Strange Meal

It hit me how much of a workout my cerebral cortex would get this weekend when I boarded the bus and overheard a half American / half Australian accent saying, “What complex numbers are saying is up plus down is different from up minus down. If you treat everything as a wave, waves are not phases.”

It really sank in when, at the opening dinner, a young man with a beard and fedora sat down next to me, stuck his hand out, and said, “Hi, I’m Derek, and I’m a philosopher” (no shade; Derek and I had a delightful chat about the episode of The Good Place where Michael puts Chidi through the trolley problem). I didn’t bother commenting to anyone about Steph Curry saying, “game, blouses” to poor, poor James Harden.

So, yeah. The conference about keeping humans safe from artificial general intelligence (AGI), or human-level AI, is weird. With AGI, you’re dealing with a concept so revolutionary in its logical endpoint that it’s almost mystical. Hardcore AI people can seem cult-like.

But it’s weird in a way that is ultimately very fulfilling, despite all the incomprehensible math. It reminds me of eating local food in remote parts of China. The first time I put ma la jiao in my mouth and felt it numb up, it was a totally alien experience. Yet I loved it; it was both rich and expansive, and shining a light on such an unexplored corner of the global flavor matrix made me see all food in a new light. Kind of like how talking about AI safety has made me reflect on this whole project of being human.

Consider something called “inverse reinforcement learning,” or IRL for short. In most AI systems, humans define an objective, then train an algorithm to meet it. With IRL, algorithms observe human behavior and define their own objectives based on their own model for human intent and well-being.

IRL is potentially very useful because humans are terrible at specifying our objectives. Even in extremely narrow contexts where software is useful today, computers often act like complete idiots, and it’s on us to make sure we’re inputting commands on the program’s terms. In complex contexts – say, your living room and kitchen – defining all objectives and their relative priorities for a home-assistance robot would be like reconstructing a sliced onion. It would be much easier to let this robot observe our behavior to know, say, what months of the year and times of day we want what windows open, etc.
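
To make that concrete, here is a toy sketch of the IRL move in Python. Everything in it is invented for illustration (the window-opening scenario, the one-parameter reward family are my assumptions, not anything from the conference); real IRL systems model far richer behavior, but the core step is the same: instead of being handed an objective, search for the objective that best explains the behavior you observed.

    import numpy as np

    # Toy inverse reinforcement learning: watch when a person opens the
    # window each morning, then infer the latent preference ("objective")
    # that best explains that behavior. All numbers are made up.
    demos = np.array([8, 9, 8, 10, 9, 8])  # observed hours the window was opened

    def reward(hour, preferred_hour):
        # A one-parameter reward family: utility peaks at the preferred hour.
        return -(hour - preferred_hour) ** 2

    # The "inverse" step: score every candidate objective by how well it
    # explains the demonstrations, and keep the best one.
    candidates = np.arange(24)
    scores = [sum(reward(h, mu) for h in demos) for mu in candidates]
    inferred = candidates[int(np.argmax(scores))]
    print(f"Inferred preference: open the windows around {inferred}:00")  # -> 9:00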

The advantage of IRL is magnified infinitely when considering the possibility of AGI or superintelligence. If we build something smarter than us, we need it to act in line with our values. But what are our values? Getting world governments to endorse a comprehensive statement of values is the most hopeless task imaginable. Wouldn’t it be great if a superintelligence could learn our values by watching us interact over time instead?

So I clapped when Stuart Russell talked about this in his keynote. But then I was spooked. This would mean outsourcing all of philosophy to machines. It’s a perversion of humans’ instinct for seeking safety when faced with a threat: to flail for something certain to grip onto.

Instead, we’d intentionally hand over relevant decisions about humanity’s direction to computers, because we can’t trust ourselves. It would be the most basic, panic-inducing admission, the ultimate source of all fearful, sleepless nights: that nothing about our existence is certain or absolute. Or if it is, it’s beyond humanity’s dimension to be able to define it. For a control-oriented species, it’s a heavy, chalky, bitter pill. But maybe robots can figure it out. And in an unnerving debate, it might be the least unnerving option.

The thing about eating out in western China is that after three days, you’re desperate for some good old puffy carbs with cheese on top. Looking forward to moving on from the ma la jiao and enjoying some intellectual pizza when I get home.


Why Great Wine and Great Pitching Are the Same Thing

A couple of months ago, my parents flew out to SF and took me wine tasting in Sonoma County. After a sip of Sangiovese at the second winery we visited, I was stirred by a weird thought, something I’d felt deep in my mind for a long time but had never surfaced. I spoke it to my Mom: “You know, great wine is like great pitching. When you taste something complex and delicious, it’s like watching a Max Scherzer get a called third strike on a nasty breaking pitch to end a tough inning.”

Since my Mom is my Mom, she said, “Hm. What do you mean?” and listened without rolling her eyes or giving me the skeptical squints. But you’re probably thinking, “That’s crazy. Throwing cork and rubber wrapped in leather and sipping overpriced fermented grape juice are completely different categories of things.”

In many ways, they are. But if you truly love wine and love baseball (and maybe if you don’t, though I don’t know what it would feel like to not love those things), watching dominant pitching and drinking great new wine feels exactly the same emotionally. Both begin with anticipation that is both eager and cautious; both deliver sensory experiences that are dumbfoundingly ornate and precise; both end with a mix of excitement at your triumph and awe at the physical mastery you’ve just witnessed.

When great wine has gone down my throat and I’m feeling the sandpapery tannins scrape my tongue, while echoes of the juicy acidity reverberate from the back of my cheeks to the roof of my mouth, somewhere, deep in my brain, there’s a packed baseball stadium erupting into a “HRAHHHHH!” as a pitcher pumps his fist and stalks back to his dugout wearing a mean look. Please allow me to show you, from beginning to end, how that sip of Sonoma Sangiovese was exactly like an unhittable slider thrown for a called strike on a full count.

The Wind Up

The Sangiovese was the third wine in the tasting’s lineup. A rosé and a pinot grigio were the warm-ups. But this winery was best known for its red Italian varietals; the Sangiovese was to be the heavy hitter, the wine that I’d come for, that I’d remember this detour by. I leaned my elbow against the counter and peered intently at the glugs of bloody intoxicant sloshed by the winetress into my glass, like a boy sizing up a Christmas present he can’t quite open because it’s not his turn.

Now imagine, if you will, a warm summer’s evening at Nationals Park in Washington, DC. It’s a nearly sold-out crowd on a Friday evening, and the home team is leading by two runs with two outs in the top of the 7th, but our opponents are threatening; they’ve already scored once this inning, and there are runners on second and third. But then the batter fouls off a pitch to make it 3-2.

The ace steps up to the rubber, and the blended rush and hiss of the crowd gathers and swells to a dull roar. It holds there while he shakes his head once, twice, now three times, before nodding to signal agreement with the catcher on a pitch. Our hurler steps back, steps forward, and brings a leg up as high as he can, pulling all his bodily energy into a deep, contorted fold. Anticipation rises in my chest and hangs there, like a breaking wave suspended over a beach by an invisible force. My pitcher has the batter right where he wants him; the outcome isn’t assured, but I’m hungry to see what happens next. In some subconscious canyon’s deepest recess, all of this is happening as I raise the glass of Sangiovese to my lips.

The Delivery

When I take a sip, the Sangiovese lashes my tongue with sweet acidity, cutting a jagged and juicy path from the front to the back. Deep hints of delicious fruit begin reverberations around the front of my mouth that will escalate as the experience unfolds. Back at the stadium in my imagination, the pitcher unleashes the ball, and it bolts forward on a slightly bent path, heading more or less directly towards the batter. For the first fifty feet, it looks like it might drill him.

But great pitches and great wines come in (at least) two acts. When the ball is about ten feet in front of the catcher, its path transforms, sliding way down and to the right. Instead of hitting the batter’s shoulder, it passes him by at a height somewhere between his belt and his knees over the inside part of the plate. To eyes used to seeing things that make physical sense, it seems to obey laws from a different universe. The batter, confused, stands motionless, unsure of what’s happened.

The Sangiovese’s second act likewise confounds physical logic. From its sharp, front-of-mouth beginning, its taste blooms like a firework throughout my oral cavity as I swallow. There are richer, more evolved versions of the tastes I felt on my tongue when the sip began, like the taste of jam compared to fresh fruit. There are textural sensations that remind me of chewing something rough, like crusty bread or dry turkey breast. The reverberations of flavor have now crescendoed into a multi-part choir of mouth-feel. Both pitch and wine began in one place and ended up someplace totally different.

A lot of wines are simple, with round flavors hitting the front and back of your palate all at once, and when you swallow, the experience is over. Sometimes that taste is technically pleasurable, in the way that a 92 mile per hour fastball with no movement piped in over the heart of the plate is technically a strike. Other wines are complex, but taste like old coffee with crumbled tree bark. They remind me of nasty split-finger fastballs that bounce two feet in front of home plate; they’re missing the entire point. Those pitches aren’t getting anybody out, and those wines are for bringing to parties where you don’t know any of the guests. Great wine and great pitching must astonish the senses with elegant precision.

The Aftermath

When I realized the Sangiovese embodied this magic formula, I inadvertently visualized the aftermath of that slider hitting the catcher’s mitt. There’s an instant of disbelief: Did that pitch really catch the inside corner? Is this taste in my mouth real? Then, the umpire punches the air and emits a bloodcurdling shout, and it’s confirmed: the inning is over, the lead is safe, and that wine tasted like nothing I’d ever tasted before.

The batter stands there helplessly for a few seconds before ruefully tossing his bat and helmet and pulling off his gloves, talking to himself as he stares off into space. The pitcher pumps his fist and yells as he storms off the mound and strides toward the dugout. And at the winery counter, I bask in the triumph of tasting something truly unique and unexpected, while hearing the tense roar of the crowd crash into a torrent of elation.

So there you have it. The twin experiences of power, precision, and pure triumph that are great pitching and great wine. I hope you have a great summer enjoying both.


What Facebook Is Still Missing About Its Role in Society

Mark Zuckerberg’s voice, which I heard for the first time in his interview with Ezra Klein, is humanizing. He demonstrates care for Facebook’s users and the world. He comes across as passionate, reasonable, and kind.

He was knowledgeable about global politics and aware of Facebook’s responsibility to promote healthy societies. He acknowledged that Facebook should have anticipated its negative use cases, and that he’s had a lot of catching up to do since the 2016 election. It sounded like he’s backing up these feelings with money, staff, and managerial time. I came away feeling more at ease about how Facebook will address the global Russian information war against liberalism.

But when Zuckerberg talked about Facebook’s goal of ensuring its users’ time on the platform is “well spent,” my vigorous nodding morphed into incredulous shakes of the head. “Time well spent” is fundamentally at odds with my experience of using Facebook.

The example he gave to illustrate “time well spent” was an interaction with a long-lost friend. You’re scrolling through your news feed, and someone you sat next to for one class in college and shared laughs with about the professor’s funny voice, or someone from your crew of high school hangout and mischief standbys, throws you a like or a comment on something you posted.

As Zuckerberg says, this is someone you might not be in touch with anymore, but through Facebook, you can see that they’re still doing ok, that they still care about you. I immediately recalled a half-dozen times where this has happened, and I agree: it’s a solid hit of dopamine to have someone you’d mostly forgotten about give you recognition and validation, seemingly out of nowhere. But clicking “like” when something moderately agreeable shows up on your screen during your scrolling ritual is a pretty weak demonstration of care for another person.

And for the handful of times I remember having such an interaction, I’ve had literally thousands of times where I’ve felt momentary discomfort, whipped out my phone without thinking, and started scrolling through Facebook, desperate for one of these little bursts of shallow validation to distract me from confusion, uncertainty, disappointment, impatience, or whatever else I’m feeling while sitting at work or waiting for the bus. A few minutes later I’ll snap out of it, feeling just as uncomfortable as before, only less focused, and probably a little more jealous or angry. There simply aren’t enough long-lost friends to make up for the cost to time and presence of having a procrastination portal that I access literally from muscle memory.

Zuckerberg mentioned that he wants users’ news feed time to be a worthwhile choice compared to whatever else they might be doing, such as watching TV or other leisure activities. But TV requires much more conscious intention and is harder to just get sucked into. You don’t just open up the website and scroll for a few seconds. You have to think about what you want to watch and make a decision on a minimum 20-minute commitment. And then, at least you’re getting scripted entertainment made by professionals -- likely a much higher ratio of dopamine to time spent than anything you’ll get on your news feed. It wasn’t great that I didn’t do any homework my junior year of high school because I was watching all 6 seasons of The West Wing on DVD, but at least The West Wing is a great show that has had a positive impact on my life. I’ve literally never regretted watching a TV show as much as any of the thousands of times I wished I hadn’t opened Facebook.

I could be an especially problematic case. I remember times when I’d be scrolling through Facebook, realize that I was just seeing a bunch of posts that I’d already seen before, close the tab, and immediately open up a new tab and go straight back to Facebook without a moment’s thought. But I can’t be the only one. I’d feel a lot better about Zuckerberg’s influence in the world if he were as woke to Facebook’s promotion of avoidance and distraction as he is to its importance to the global political environment.

Cambridge Analytica vs. Law Enforcement Heroes: Data Collection Tradeoffs

This quick thought is now so old as to no longer count as topical, but look at these two headlines from a couple of weeks ago:


In one case, a social media company assembled detailed personality profiles on hundreds of millions of people through the phones in those people’s pockets. It then treated that database with all the gracefulness of a college student taking a beer into the shower, abetting the election of Donald Trump (whatever dramatically negative characterizations about him you choose to make, I probably agree with all of them). In the other, heroic sheriffs and FBI agents ended a bombing spree by using electronic records made possible by the phone in the perpetrator’s pocket. This is a sharp illustration of the tradeoffs we have to negotiate in the era of data collection.

The decades-long trend is to bring data collection and processing deeper into our lives. With the gains in productivity made likely by the Industrial Internet of Things, with the convenience offered by the recent spray of Google Assistant ads I’ve seen on billboards and heard on podcasts, with the huge financial and research investments in AI systems that require tons of training data to get smart, the main project of humanity for the next while will be to record as much information about ourselves and the world as possible, and send it off to a database somewhere so we can save time or get a little more comfortable.

By the looks of these two headlines, at the end of that process it will be near-impossible for even off-grid terrorists to evade capture. But all that information will be owned by a loose handful of entities, with strange and unsettling consequences for politics and who knows what else.

Is there a happy middle ground? Or is there another possible direction, either through policy or technological developments, that I'm not seeing?

Democrats' UBI Party Foul

The California Democratic Party has made universal basic income (UBI) a plank in its official platform. I’m giddy that the party is willing to champion bold revisions to the social contract that we probably will need. But I’m queasy about actually instituting a UBI in 2018. Here’s why UBI is premature:

Mass unemployment is not happening this decade or next. UBI’s most prominent proponents are Elon Musk, Mark Zuckerberg, and Sam Altman. Their followers believe that technology will soon render around half of American workers obsolete. There simply won’t be as many jobs as there are people. Unemployment will shoot upwards even as labor force participation plunges.

Our medium-term economic prognosis is not really so apocalyptic. Like sanguine baseball fans in early spring, technologists are overconfident. Musk and others believe that we will soon create human-level artificial intelligence (known as “artificial general intelligence,” or AGI). AGI could by definition perform any human task. But from what I’ve gathered from recent evidence, AGI is at least a few theoretical leaps and bounds away.

Without AGI, there will remain things humans can do that computers can’t. And UBI proponents have insufficient faith in humans’ ability to assign value to things that weren’t valuable before. Throughout history, this tendency has enabled the creation of entirely new jobs as technology replaces old ones. Growth ensues in job categories that capitalize on things humans can still do. In the coming decades, humans will have the opportunity to perform more jobs -- some new, some very old -- requiring creativity, strategic thinking, social intelligence, and the ability to care for others.

Proposed UBI amounts are really small. Most UBI proposals would allocate something like $1,000 each month to recipients. That’s not even close to a living wage. If you’re worried about the ability of half the population to subsist, how does a citizen salary of $12,000 solve that problem? At the same time, it’s hard to see a larger sum being feasible anytime soon, given that a UBI of $12,000 would cost over $3 trillion a year.
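
The arithmetic behind that number, assuming roughly 250 million adult recipients (my round figure, not one from any particular proposal), is a one-liner:

    # Back-of-envelope annual cost of a $1,000/month UBI.
    recipients = 250_000_000      # rough US adult population (assumption)
    monthly_payment = 1_000
    annual_cost = recipients * monthly_payment * 12
    print(f"${annual_cost / 1e12:.1f} trillion per year")  # -> $3.0 trillion per year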

It’s possible that technology could unleash an exponential increase in wealth creation, which when shared with all people via a much larger UBI would bring about mass abundance. But scenarios this utopian-sounding don’t usually come to pass.  

UBI is a drastic step given stable economic conditions. Unemployment is at 4.1%. Wages seem to be rising. If technology is going to create an economic crisis, it hasn’t happened yet. Can’t we wait until unemployment rises, I don’t know, two points in one year amid productivity increases and without a recession, before we break the link between income and labor?

I’m as anxious as anyone about what a fully platformized economy will mean for most people, and I’m glad that Democrats are willing to contemplate radical measures. In the long run, I’d bet that AGI will be created. Some kind of epoch-shifting system of wealth redistribution may become vital to economic equality. But the economy won’t be fully platformized for a while yet, and I’m not sure UBI is the right system. Let’s let more grounded timelines of economic change inform our immediate policy proposals.

If the White House Had a Cyber Czar...

The Bill Simmons Podcast is one of my most consistent diversions. When I’m making coffee, riding the bus, or doing dishes, there’s no better escape than free-flowing conversation about NBA trade rumors, GOAT athletes, or sports/pop culture analogies. But his recent episode with Buzzfeed internet writer Charlie Warzel set my intellectual tuning forks abuzz. After talking about fake news, deepfake pornography, and AI-assisted spearphishing, they turned to our government’s lack of response to the era of fabrication:

SIMMONS: I don’t see anybody in our government, even having the wherewithal or the foresight to do, and I’m not just talking about Trump, I’m talking about anybody, Democrats. Like who is...people always complain about the Republicans and Trump and all this stuff, but it’s not like Democrats have emerged, like these great voices have emerged on that side either. Who are gonna be the voices that fight for this stuff?

WARZEL: I mean, it’s really hard. So that story that we initially talked about, I went to Washington as like the first stop in that story. Didn’t end up using anything, because the meetings that I had with people were just sort of like, “oh, yeah, you know, we’re thinking about it.” You know, there’s no real, I think the State Department just announced there’s like gonna be a cybersecurity initiative thing, but it’s very vague…

SIMMONS: It would seem like the cyber department would be the single most important department the country could have at this point other than homeland security.

I agree, Bill! Let’s make a deal: in the next administration, I’ll run a public campaign for you to be made Sports Czar, like you’ve always wanted. In exchange, you can lean on POTUS to tap me as Cyber Czar. Here are four policy areas that are hurting for debate and leadership:

Security: Cybersecurity remains a mythical concept even among politicians who claim to be national security experts. It’s unfathomable to them that computers could knock America off the superpower pedestal. But the cybersecurity warning signs have been flashing red for a decade. Hackers attaining operational control of our electrical grid? Check. Terabytes of data on the control systems for our most advanced military aircraft stolen by China? Check. Air traffic control systems with worse security than your aunt’s AOL account? You got it. To avoid a cyber 9/11, someone needs to devise and oversee cybersecurity defense for critical infrastructure. Nobody is doing that right now.

Income inequality: Interconnected digital systems are changing the skills valued by the economy. Owners of these digital systems are accumulating disproportionate economic wealth and power. Whether it’s a tax on data, our sudden arrival at a blockchain-enabled promised land of open protocols, or something else, correcting for inequality will mean changing how we’ve set up digital technologies to interact with people. Most politicians either don’t acknowledge inequality is a problem, or think that free college and Medicare-for-all will work on their own.

Privacy: Privacy seems antithetical to the kinds of technologies we’re adopting, the whole point of which is to collect data and send it to some far away computer. The intellectual and legal concepts for keeping information private were built when the amount of information in existence on earth was a tiny fraction of what it is today. Someone needs to lead a conversation about what privacy means when sharing information has become a fact of life.

Culture/Truth: The biggest mindfuck issue of all. As my friend Elliot said to me after my post about fabricated celebrity images, “Either general skepticism in the population goes up (that's the happier ending) or we end up with ever more volatile culture wars as the memes control everything. Who controls the memes? It's kinda scary to think about the fact that we may soon have no strong evidence to lean on as everything can be fabricated.” Other than senatorial chastisements of Facebook about the relatively tiny amount Russians spend on fake news, I don’t see anybody in power talking about this.

One of the things I like about Simmons is how he’ll decide how important something is today by imagining how it’ll be remembered in the future. For example: The Shape of Water shouldn’t have won best picture, he says, because in 10 years the movie we’ll still be talking about the most will be Get Out. No disrespect to people who liked The Shape of Water, but he’s probably right. This happens all the time with the Oscars: Dances with Wolves beating out Goodfellas in 1990 is the canonical case study. 

I think about political issues in the same way. When historians look back on this time, they are going to evaluate us on how we handled the internet, mobile, social media, AI, and all the other forms of data collection and processing we’ve brought into our lives.

Simmons and Warzel are exactly right: our leaders are at best distracted, and in most cases, totally clueless, about the effects of technological change. It’s ironic that the people who care the most in the world about how they’ll be seen by history aren’t paying attention to what’s most historic about this moment.

How To Align Platforms With the Public Interest?

Last week, the ex whose pics I can’t bear to delete from my phone walked into the political gin joint I’ve taken refuge in since he left last year. In his top-secret remarks at the Sloan Conference, Barack Obama had something to say about platforms and the public interest. Per Recode:

“Former U.S. President Barack Obama isn’t happy with Facebook and Google. They’re not just incredibly profitable tech companies, he said, they are ‘public goods’ with a responsibility to serve the public.”

Shocker: Obama sounds right to me. Maybe we’re in an era where being a giant internet company trafficking in users’ personal information obligates you to some kind of public responsibility. Facebook didn’t break the rules in 2004 when it built an easy and fun interface for connecting and interacting with people on the internet. But in 2018, maybe the rules have changed. Too much of the population relies on Facebook to maintain emotional connections with people and figure out what’s going on in the world. It’s weird that one company has ownership over people’s ability to do those things, but so long as that’s the case, the company should be a worthy steward. Other companies with similar levels of power over critical aspects of people’s lives, such as their income, should bear similar responsibility. 

But none of the items on government’s usual menu for steering private enterprises towards the public interest are all that appetizing. Nationalization feels too much like steel plants in China that have net negative profits going back decades and consume taxpayer money, or like economies where people with gray hair take home fat paychecks and drink really good wine every evening, but people without gray hair have no credible life ambition except to be a civil servant.

Utility-style regulation, with its focus on protecting consumers from price-gouging, doesn’t feel relevant. People pivot to anti-trust because it’s the word we use for government action against companies that seem too big and powerful, but that feels wrong too: breaking up Facebook and Google into smaller pieces would work only by diminishing the platforms’ utility to the very consumers we want them to serve better. Ownership by users, the Green Bay Packers model for the future of capitalism, is too radical for me to fully understand its implications, but I wish I knew more about how it’s worked in industries other than football. 

I do think that continuing to treat internet platforms like they are no different from fast food chains or O-ring manufacturers is going to lead us to a very unequal, culturally fragmented, insecure place. But I’m not sure of the best way to squeeze the public interest into a seat at their table.

Ideas, please?

My Alien Kidnappers' Fake News Barack Obama Video

An update to the onset of the “fabricate everything” era: videos of politicians saying things they never said.

This video has an immediate “wait a second, what’s wrong with this picture?” vibe to it. If my alien kidnappers scanned my memory, determined that the thing I most trust in the universe is Barack Obama’s image and voice, and tried to reproduce it in an effort to brainwash me, but my memory didn’t have all the data they needed about his actual body language when talking, this is what they would create. The mouth moves, but the rest of the image is still. It’s like a high quality ventriloquism act. Obviously fake.

So the technology is not perfected. Still, how many more election cycles do you think we'll get through before it is? Will we be lucky enough to get through 2024?

Earlier this year Trump claimed that the recording of him boasting about sexual assault, with all the delicacy of a high schooler boasting about having gotten to second base to a transfixed audience of seventh graders, was not actually him talking. It is really beyond the scope of my imagination right now to think about what we'll do when claims like that are credible. And I think we're not that far away.

Can you or anyone else who knows anything about the LSTMs used to produce these videos reassure me?

Frissons About Amazon's Health Care Announcement

I first came across the word “frisson” just now as I was trying to think of a title for this blog post. A frisson is that shivering sensation you get when you are excited or scared. When I think about Amazon training its guns on health care, I feel both. 

My first thought after Amazon’s vague announcement from three weeks ago was that the platform will do with health care what it did with buying books: use data and algorithms to offer a cheaper, more convenient, and more intuitive experience to consumers. But my glee is tempered by trepidation about the implications for employment, equality, and privacy Amazon's domination of health care would entail.

To avoid offending any aspiring health care economists, I concede: I can’t tell you exactly how Amazon will succeed in health care, and there is a high probability that they will founder. Health care isn’t retail. Other tech companies have tried using algorithms and data to lower costs and create a superior customer experience, and been tripped up by the same incentive problems that hobble health care incumbents.

But I’m a true believer in the world-eating prowess of software. I would bet money on a new player using informational and computational superiority to become the intermediary for most health care transactions. With its money, expertise, and patience, Amazon could be the winning horse.

The health care space would probably employ fewer people if an Amazon interface becomes the intermediary for most transactions. Providers of health care products and services who want access to the demand Amazon controls would likely face brutal cost-cutting mandates. Hospitals, pharmaceutical companies, and medical device manufacturers would feel the Amazon squeeze, just like authors and publishers. Meanwhile, a lot of people employed in marketing, sales, and administration wouldn’t be so necessary with Amazon sourcing patients the way Facebook sources web traffic for media companies.

Health care employs 17% of the workforce and is a central character in America’s post-industrial economic story, so even a small loss of jobs would have huge ripple effects. It would mean that the internet’s transformation of the economy is moving out of beta and into general release. And it would be a full step towards the future I fear: where most economic activity is orchestrated through a sufficiently small number of platforms that you could fit their CEOs into a single smoke-filled room.

And those CEOs would be able to know everything about everyone. A dominant Amazon Health, in particular, would have access to the medical secrets of a huge proportion of Americans. All that data would probably lead to insights that improve health care. But I would be uncomfortable with the same company that knows so much about my buying preferences knowing so much about my body. I’ll feel creeped out if Amazon puts cough drops in its “suggested items” box a day after I receive a strep diagnosis.  

Platforms are great at making consumers’ lives easier, and no American consumers need their lives made easier more than health care consumers. Amazon’s new mission has a high upside. But it feels like we’ve entered an era where the only way to solve hard problems is through the application of huge databases and powerful algorithms. And if those are the only tools we have, we’ll end up in a world where equality and privacy are difficult, if not impossible, to maintain.

Paying Bug Bounties Is Good, Right?

I keep getting this feeling when I read what most politicians say about issues that involve algorithms, data, and the internet: it’s not that their policies are necessarily bad, it’s that they sound like they have no idea what they’re talking about.

Case in point: US Senator Dick Blumenthal (D-CT), taking Uber to task last week at a Commerce Committee hearing. Blumenthal, who I believe once told his constituents and the public that he had served in Vietnam when in fact he hadn’t, was livid that Uber had not reported a security vulnerability to its users.

The vulnerability in question was hardly an Equifax-level data breach. In 2016, a group of hackers contacted Uber. Apparently, the company had left certain critical login credentials lying around on Github, giving the hackers access to personal data on 57 million riders and drivers. Uber paid the hackers $100,000, fixed the vulnerability, and that was that.

At the hearing, Blumenthal put on a clinic in livid sanctimony. Paying off the “blackmailers,” he declared, was “morally wrong and reprehensible.”

Bear with me here, because at this point I’ve only read one and a half books about cybersecurity. But isn’t paying off “blackmailers” like this the best way to encourage hackers to use their craft for legitimate ends?

Every major software system has Death Star-level vulnerabilities that its makers don’t know about. Smart companies pay handsome sums to hackers that inform them of these vulnerabilities, so that they can be fixed before they are discovered by the bad guys. From what I understand, many people make a living this way. Characterizing them as “blackmailers” seems like accusing airplane safety inspectors of sabotage.

I should say that I haven’t had time to investigate this much on my own. Maybe the hackers who reported the breach to Uber really were bad guys who held the data for ransom. But if they were, I think they would have asked for a lot more than $100,000. And it seems a lot more likely that Blumenthal has a dangerously antiquated perspective on cybersecurity.

After all, in the past, many companies would be so aghast at having imperfections in their software pointed out that they would threaten to sue the hackers who found them. Many others did not have bounty programs, or if they did, paid out very little in rewards. People with hacking skills had no financial incentive to devote their craft towards legitimate ends.

Maybe Uber technically should have disclosed the incident, and maybe one of you can tell me why. But maybe they didn’t do it because it wasn’t nearly the dramatic, morally fraught episode that Blumenthal’s alarmist language makes it seem.

Hauling Uber in for a talking-to seems like punishing the system for working exactly as it should, given how insecure all of the software and hardware we use really is. Somebody who has more than one and a half books’ worth of knowledge about cybersecurity, please weigh in.

Bad News About Fake News for Jeff Flake

“2017 was a year which saw the truth – objective, empirical, evidence-based truth – more battered and abused than any other in the history of our country.” So declared Senator Jeff Flake, the latest Republican to speak his mind about Trump only after euthanizing his political career, on the Senate floor a few weeks ago.

Senator Flake and I have no disagreements about 2017. But the decline of truth began before we elected a president who lies like a four year old, and it will continue long after he has retired to the big golf course in the sky. By empowering anyone to create, copy, and transmit text instantly and at massive scale, technology has made truth harder to distinguish from falsehood. As the power to fabricate extends beyond written information, telling what’s real from what isn’t will become even more difficult.

We’re supposed to evaluate truth based on standards of objectivity. We trust the New York Times, we say, because we know they corroborate information from multiple sources, check every fact before publishing, and adhere to other practices constituting the standards of professional journalism.

But standards of objectivity or empiricism aren’t the only way people judge whether information is trustworthy. When it comes to text, aesthetic presentation matters too. If I receive a cease-and-desist letter from an attorney because I’ve been torrenting too much without a VPN, I’m much more likely to comply if it’s printed on fancy letterhead than if it’s scrawled on the back of a ketchupy napkin. Information printed on nice paper under gothic font has a lot more truthiness than a pamphlet of grainy photos on cheap newsprint handed out by a gaggle of Lyndon LaRouche devotees.

Before digital platforms became a dominant source of information, the only institutions that could afford wide distribution were ones with high standards of ethics and objectivity. These were also the only institutions that could afford to print on nice paper under gothic font. The objective standards and aesthetic presentation, the intellectual and visceral truth signals, were in alignment. But today, a headline on Facebook from the San Francisco Chronicle looks the same as a headline written by two guys in a basement in Long Beach. The aesthetic signal is obsolete. Our truth radar has been scrambled.

OK, sure: for a given news item, it’s not that hard to look at it for two seconds, apply some additional standards beyond the aesthetics (does the headline utilize correct spelling?), and make a determination as to its trustworthiness that has a high probability of being correct. It’s deeply sad that many people around the world cannot or will not do this.

On the other hand, losing our visceral truth standard makes things a little harder, and in aggregate, a little is a lot. Using the intellectual standard for everything is like flying manual when we’ve been on autopilot for our whole lives. Looking into each source of information and independently verifying that it was collected in a trustworthy way takes more time than people will give, especially when they are under the avalanche of information that is a social feed.

Soon, even those of us willing to spell-check a Facebook headline may find fact hard to distinguish from fiction. Algorithms won’t stop with text; in the future, it may very well be possible to fabricate photographs, too. A November study from Nvidia Research used generative adversarial networks (GANs) to fabricate shockingly real images of everyday scenes and objects – a bedroom interior, a cell phone, several churches, a few buses, and other stuff. The study also fabricated images designed to look like photographs of celebrities; the people in the images aren’t real, but they certainly look it. Have a look for yourself. It freaked me out to consider that the lifelike images I was looking at are of things that don’t exist anywhere in the world.
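
For the curious, the adversarial trick behind those images can be sketched in a few lines of PyTorch. This is my toy construction, not the Nvidia code: the generator here learns to fake a one-dimensional bell curve rather than a face, but the tug-of-war between fabricator and detector is the same game the study played at massive scale.

    import torch
    import torch.nn as nn

    # Toy GAN: a generator learns to mimic a 1-D Gaussian "real" dataset.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> realness logit
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data: N(3, 0.5)
        fake = G(torch.randn(64, 8))           # fabricated data

        # The discriminator trains to label real samples 1 and fakes 0.
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # The generator trains to make the discriminator call its fakes real.
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # After enough rounds, G's output is hard to tell from the real thing --
    # which is exactly what makes fabricated photographs so unnerving.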

Human recollections of events usually differ, and some people have always been comfortable lying on paper, but since their invention photographs have been recognized as definitive proof of what actually went down. Reading that study, it’s easy to imagine celebrities’ reputations being ruined after images of them doing things they never did blow up on Twitter. In criminal proceedings, the credibility of photographs could be undermined as easily as a shaky witness. It’s hard to imagine a world where a photograph of something happening isn’t proof that it really happened, but it may be on the horizon.

How do we have a political discourse, criminal accountability, or shared culture without shared facts? And if there is no answer to that question, how do we maintain truths that are accepted across all cultural, social, and political divides when any kind of electronic information can be fabricated?

Good luck with that, Senator Flake.

In Defense Of Lines

One day last year, I was waiting in the security line at San Francisco International Airport. As I approached the clothing and accessory removal zone, I noticed a distinctly uncrowded section in the warren of ropelines marked “area for CLEAR subscribers.” What, I wondered, was CLEAR? As my girlfriend later informed me, CLEAR is a service in which you can enroll to cut the security line at airports. Anyone qualifies; it has nothing to do with your security risk level. You just have to be able to pay $180 per year.

Even though $180 per year is a reasonably modest sum, I noticed feelings of resentment when I regarded this enterprise. The security line is a shared struggle, a forgivable indignity visited upon all of us with obvious justification, in a public place. Now you can just pay money to worm your way out of it? It just felt wrong. And the more I thought about it, the more I thought that it comes at a real social and political cost. Lines are an equalizing force in society. We can all bitch together about waiting to get our bodies screened or waiting to get our drivers’ licenses renewed. Take away lines, and you’ll accelerate the dwindling of shared experiences that make up our common culture.

When I was a kid in the 90s, I took pride and comfort in knowing that, while society had some true elites -- the private jet, multiple mansion people whose lives I gawked at on VH1 -- and also some really poor folks, the vast majority of us were some variety of normal. Waiting in line supported the sense of a single, middle-class experience that existed for everyone even as actual incomes and life experiences varied greatly between the wealth extremes. If you were a Fortune 500 CEO, an actor, or a US Senator, maybe you didn’t have to wait in line for stuff. But even if you were a partner at a corporate law firm, sending your kids to private school, and taking vacations to Europe, you still had to sit in traffic like everyone else, and wait in line to pick up groceries, vote, and board a plane.

Today, shared experiences are diminishing. We no longer all watch the same TV shows; I know this because NCIS, which nobody I know has ever watched three seconds of or seen a single piece of content about on the Internet, is apparently the most-watched show in the country. Lamentations about media filter bubbles have become a liberal cocktail party cliche. The NFL, which until very recently was as ubiquitous and unifying as the Catholic Church in the 12th century, is becoming culturally polarized.

Like everything else, lines are giving way to this era’s gravitational forces of digitally-driven classification, differentiation, and discrimination. Getting a taxi in the rain used to be equally hard for everyone; now it’s hard only for those who can’t afford a 5x Uber surge. On freeways, people who pay dynamically-changing prices via devices on their windshields can increasingly bypass traffic, leaving people who can’t afford the fast lane with even slower commutes. And if you can afford it, you can skip to the front of airport security. The extension of data gathering and processing throughout our lives is making lines obsolete. Why should everyone wait together when a computer can measure everything about every person and divide them according to whatever qualities the computer’s owner sees fit?

Since Donald Trump was elected, the question for everyone who writes or talks about politics for a living, and many who don’t, has been: why? What’s the deal with this simmering rage that is so well documented but so hard for people who don’t feel it to understand?  

The reason most commentators feel safest giving is economic struggle. It’s true that wealthy people have been capturing most gains from economic growth since 2008. But most people have, materially, most of what they had in 1999, when everyone was just so excited about the Internet and the end of the Cold War. Commentators who are in touch with reality offer that the rage Trump supporters feel is simply racism.

I humbly submit that another factor, which walks hand-in-hand with outright racism, may be something so simple as the decline of lines. Waiting in line while other people cut really sucks. It’s the problem with going to nightclubs in New York: you stand in the cold while watching rich guys with foreign accents and wearing scarves go inside and have a great time. Watching that happen in increasing domains of daily life would be extremely frustrating for any human.

That sense I felt as a child, that most things were the same for most people, felt reassuring. It felt fair. That reassuring sense of unity has remained even as cultural polarization and media fragmentation have revealed that, despite Barack Obama’s thunderous declarations, there is no one United States of America. But it can’t endure under the in-your-face indignity of systematized line cutting. Losing the universality of lines means losing the visual reinforcement that we are all pretty much the same, even if we knew we never really were the same to begin with.

I have a confession to make: a couple of months ago, I qualified for TSA precheck. Because there’s now enough data on me out there for the government to model with stunning accuracy whether I’m a security risk, I get to cut the security line at the airport. It feels so deliciously fortunate to leave my shoes on and my laptop in my bag while I stroll through the quaint 20th-century artifact that is the metal detector. But my smugness comes at a cost.

What will be the next line to fall by the wayside? The grocery checkout line? The Starbucks line? The line for the ski lift? If we value equality, we should have areas of life where we force everyone to wait in line, efficiency be damned.

Robots won't take your job, but algorithms will take your income

Many brilliant people that I deeply admire are concerned that automation will put the bulk of the human population out of work, forever. Here’s Yuval Noah Harari, talking on Ezra Klein’s podcast, back in March:

"[M]ost people tend to overestimate human beings. In order to replace most humans, the AI won't have to do very spectacular things. Most of the things the political and economic system needs from human beings are actually quite simple. We earlier talked about driving a taxi or diagnosing a disease. This is something that AI will soon be able to do better than humans even without consciousness, even without having emotions or feelings or super intelligence. Most humans today do very specific things that an AI will soon be able to do better than us.

What we are talking about in the 21st century is the possibility that most humans will lose their economic and political value. They will become a kind of massive useless class — useless not from the viewpoint of their mother or of their children, useless from the viewpoint of the economic and military and political system. Once this happens, the system also loses the incentive to invest in human beings."

Ezra disagrees. Here’s the core of his rebuttal to Harari, from an August piece in Vox:

"[T]he Industrial Revolution, and subsequent technological revolutions, really do feel relevant. A hundred years ago, or 400 years ago, people did much more useful jobs — huge swaths of the human race, for instance, were directly involved in the production of food and the collection of water.

Compared with those ancestors, humans today are a massive useless class. What sort of job is “editor of an explanatory journalism web site” next to “farmer”? Would our ancestors value the work of psychologists or customer service representatives or wedding planners or computer coders?

But this, to me, is the story of labor markets in the past few hundred years: As technology drives people out of the most necessary jobs, we invent less necessary jobs that we nevertheless imbue with profound meaning and even economic value."

In this point-counterpoint, I’m with Ezra. I don’t think humanity will suffer widespread unemployment due to automation anytime soon, short of development of AGI. So long as there is anything at all humans can do that machines can’t, we’ll find a way to pay people for it.

But that doesn’t put me at ease. I still look at automation with a really deep sense of foreboding. We’re already at a point where, if you’re just starting your career and want to live in one of America’s iconic cities and regularly go to brunch, it really helps to work in the technology industry. That’s only going to get worse. I’m afraid that in the foreseeable future, if you want to earn enough money to live wherever you want, you’re going to have to either work in support of one of a few large, algorithm- and data-centric software platforms, or be a superstar in your field.

Working in home care or as a YouTuber, two growing job categories that require qualities of care or creativity that won’t be automated anytime soon, won’t get you to the standard of living and cultural experience, relative to society overall, that we today refer to as “middle class.” Home care workers get paid very little. Even the YouTubers who earn enough ad revenue off their content to pay their bills can see their livelihood evaporate in an instant due to demonetization, a copyright dispute, or some other algorithmic pitfall that appears out of nowhere.

For a vision of the future, look to the recent past of the media industry. The point is frequently made that it’s really hard to make a decent living in media unless you are talented, catch a huge break, and get hired by one of, charitably, the 20 most powerful organizations. It used to be that you could buy a house and participate in America’s common culture writing for a local paper. The destruction of geographic boundaries and the instant and infinite copying and distribution of the best-quality products have done in that fantasy for hundreds of thousands of aspiring scribes.

Now, who gets paid in media? People who work for platforms (Facebook) or superstars (the New York Times).

Sure, there are hundreds or thousands more “publications” that were enabled by the internet that you can write for. You’re reading one right now. How much money do you think this blog makes? You get one guess.

Media was the platforms’ first casualty because its product, composed of nothing more than information in highly copyable formats, could have its production and consumption completely re-imagined around the wave of information technology that included Google, email, and the laptop.

But this kind of dramatic re-ordering around platforms and superstars will continue as data sensors in communication with algorithms reach from our desks into our pockets and soon, into our cars, homes, hospital bathrooms -- pretty much anywhere we can think of.

There will always be jobs for people to do. But instead of a relatively unified American economic experience, I think we’ll have something else. At the top will be platform owners, whose algorithms and stores of data will orchestrate economic transactions. Just below them will be the engineers who directly support the platforms, the salesmen and marketers who bring them into your lives, the lawyers who represent them in court, the doctors who treat their illnesses, and the finance people who manage their portfolios. At roughly the same level will be content creators -- people who are in the top tier of talent in their fields. And then there will be everyone else: taking what the platforms give them, all on the platforms’ terms.

We can’t restrict platforms’ growth. And we can’t deny the many positive impacts they have had on lives in this country and outside of it. But there is no part of our lives that information technology won’t reach. As that transition continues to unfold, I can’t see an endpoint that isn’t a massively worse version of the low-unemployment, highly income- and wealth-polarized society we have today.

That’s the threat of automation as I see it. I’m grateful for all the attention that the possibility of robots taking jobs is garnering in media and political circles of late. But the sooner we move beyond hand-wringing about potentially massive unemployment and start figuring out how we can have a reasonably equal society when economic power is concentrated in platforms, the better.

Hillary Clinton's Narrow AI Take

Last week, worlds collided: Hillary Clinton was quoted talking about AI, as flagged in Jack Clark’s Import AI newsletter.

First I felt a flurry of excitement that such a well-known political figure was calling attention to an issue I’ve been pining for well-known political figures to pay attention to. But after reading what she said (full transcript here), I was vexed by how she discussed AI as a singular threat that can be met with a single policy. “AI” is a really broad technological concept with almost unlimited applications. Clinton talks about addressing it with a blue ribbon commission as if it were a narrow problem comparable to the growth in entitlement spending. This unfortunate way of thinking is very common.

Treating AI like it’s a singular threat feels right to some people for a few reasons. First, it feels right to the uninitiated because it fits their vision of threatening AI as presented by Hollywood and furthered by news articles that reproduce still images from Terminator while ostensibly discussing serious issues. It allows them to go down the familiar path of personifying AI.

Then there are those, many of whom are in the top 0.01% of AI expertise, who treat AI as a singular threat because there’s one potential threat that stands out as more urgent than all the rest. This group includes the many brilliant folks who are studying the existential risk of developing superintelligence. (I am not using “brilliant” sarcastically – they make a strong case and are much smarter than I am). They tend to think that this risk is so high and imminent that worrying about other AI threats is like worrying about the long-term impact of carriage horses on urban public health right before the car was invented. 

Then there are those who are horrified that we are enshrining our culture’s bias and exclusivity in the algorithms that are increasingly controlling our lives. Finally, as Clinton suggests, there are those who are certain that AI is soon going to supersede human abilities such that many people will suddenly lose their jobs.

But AI isn’t just one of any of these issues; it’s all of them, and more. The areas of policy for which AI has implications range from cybersecurity to warfare to transportation to employment and the safety net to policing and privacy, and yes, I believe at some point, existential risk. Smart consideration of the impact of AI should go beyond consideration of the machine learning algorithms and robots that are getting so much play on Twitter. It must be part of a more general discussion about the machine algorithms that are becoming central to our lives and the platforms that build and operate them. How is one blue ribbon commission supposed to come up with unified policy for all of that?

We don’t need an “AI law”; we need a massive uploading of knowledge and understanding about AI into the minds of the people who are writing all the laws. I’m not sure how you do that. Maybe a really smart AI person might have an idea. Algorithms, and the AI breakthroughs that make them more valuable, aren’t a distinct area of policy. They are relevant to every policy.

In some sad way, I’m gratified that AI policy would have been a focus of a Clinton administration. But she, and I suspect even the most forward-thinking public servants, still isn’t grasping the scale of what’s happening.

The Farmer and the Historian: Mindfulness in the Era of the Algorithm

Most of the time, I’m really happy with technology, what it’s done for my life, and what it’s done for humanity. But the adoption of technologies doesn’t necessarily lead to the outcomes for society we would choose if we looked ahead a little bit. In reading the work of Japanese farmer-activist Masanobu Fukuoka and Israeli historian Yuval Noah Harari, both daily practitioners of mindfulness, I’ve come to believe that society needs a conscious and intentional thought process about this as algorithms grow in their power and ubiquity.

Fukuoka looks like a crazy hermit, with his scrawny beard and wispy gray hair waving down the side of his wrinkly dome as he stands in the middle of what looks like a field of weeds. As you might expect from a crazy hermit, Fukuoka has some crazy ideas. His thing is farming, and he hates the way we do it.

I don’t know anything about farming, but from a few misspent weeks playing Farmville and a basic awareness of ancient history, it feels like plowing your fields is important. But according to Fukuoka’s 1970s manifesto, The One-Straw Revolution, plowing is useless. So are pruning, trimming, pesticides, herbicides, chemical fertilizer, genetic modification, and every other technology that farming experts from the time of Hammurabi through today insist have improved yields and reduced hunger. My mom tried to get me to weed our vegetable garden when I was a kid; I rarely did, but I never forgot her telling me that weeds are bad because they choke off vegetables from air, water, and sun. Not so, according to Fukuoka. Apparently, weeds provide ground cover, prevent erosion, and repel harmful insects; pulling them forces the farmer to spend time and money applying fertilizer and pesticides, which create problems of their own that need to be addressed.

Instead, writes Fukuoka, farmers should practice “natural” or “do-nothing” agriculture: where farmers create, at scale, the natural conditions in which the ancestors of today’s crops would have grown before they were domesticated, and sit back and watch the plants grow. Fukuoka insists that his methods will yield just as much as fields that use scientific farming products and techniques. And natural farmers live lives of relative leisure. By disrupting the balance of nature with technology, modern farmers create problems that force them to work much harder for much less life satisfaction.

So it goes, writes Fukuoka, for the scientific advances that have enabled our consumption-driven lives and economic systems. Instead of helping humans achieve satisfaction, technologies create new problems while fueling a never-ending desire for more. Every adoption of a new production-boosting tool has furthered an ever-growing crisis of interference with a natural order, leaving us in constant distress. Fukuoka writes that under his ideal scenario, all but a very small number of humans would move to the countryside and undertake simple lives of cultivation and contemplation.

Based on subject matter, it’s weird to draw a connection from Fukuoka to Harari. In contrast to the former’s lectures on hills and shacks and fields and dirt, Harari writes of consciousness-altering combat helmets, chemically quantified emotions, and new religions where a person’s value is based on how much data they produce for consumption by all-powerful algorithms. But though he doesn’t share Fukuoka’s hostility, he regards scientific progress in service of contemporary values with something close to vigilance.

Harari hints at this in one of the early parables of his book Sapiens, about the domestication of wheat. Before agriculture, wheat plants struggled in life-or-death competition with weeds for air, water, and sunlight. At that time, humans survived by foraging and hunting for a few hours per day, leaving a lot of free time for napping, playing with children, and having sex. They moved homes constantly and ate a rich and varied diet. Then humans started killing the wheat’s competition. They started planting wheat in neat rows and diverting rivers to provide it with water. Wheat’s life had never been so good. For humans, it seemed great: food consumption increased, and people had enough to follow their evolutionary instincts and grow their families.

But over time, average quality of life plummeted. Taking care of wheat required people to perform backbreaking labor from dawn till dusk, and to stay in one place for most of their lives. Their new permanent communities became ideal breeding grounds for infectious disease. Their diet became bland. Meanwhile, wheat plants lived lives of luxury and abundance relative to their ancestors. The upshot, by Harari’s telling, is that wheat domesticated humans, not the other way around.

I really can’t say that I’d rather be a prehistoric hunter-gatherer than a 21st century policy analyst in San Francisco. But Harari’s telling forces me to consider that the discovery of agriculture might have been a humanitarian catastrophe, no matter how successful it was at fulfilling our evolutionary drive. And despite the fact that Fukuoka’s assertions run counter to all evidence, his general ideas ring true in a similar way to Harari’s wheat parable. For all the basic human problems we’ve used technology to mostly solve in the last hundred years (hunger, early death from infectious disease) and radical increases in what we refer to as our standard of living, the world these technologies have enabled is home to mental, social, and ecological problems that we didn’t even imagine were possible before.

One of the most interesting things to know about Fukuoka and Harari when reading their work is how much both practice mindfulness in their daily lives. With its reliance on patience and non-intervention, “do-nothing” farming is essentially one giant meditation practice. And Harari meditates for two hours each day, and goes on a two-month silent retreat every year.

Mindfulness aligns perfectly with their writings about agriculture and technology. The idea behind mindfulness is that our constant, mindless reaction to impulse and striving for more causes suffering, and that by slowing down to observe our breath, thoughts, and feelings, we can intentionally make adjustments to how we think about and do things that increase satisfaction. Much like a meditation instructor guiding his students to become aware of their wandering minds and bring their attention back to the present, Harari and Fukuoka are encouraging all of humanity to be more conscious of our collective impulses, and more intentional about how we want to live, work, think, and relate to one another.

So far we humans have been uninhibited in our embrace of digital tools that allow us to do things much more quickly and efficiently than ever before, whether we’re taking an Uber to the next bar on a Friday night, ordering diapers and groceries on Amazon, or booking a hotel for a business trip. These are good things. We should absolutely not inhibit them. I’m not prepared to move to the mountains and become a natural farmer, as Fukuoka says I should. But we should be wary of company mission statements that make “organizing all the world’s information” or “making the entire world connected” seem inherently good for the human experience.

Humanity’s powers of collective self-awareness, reflection, and corrective action are much greater than they were when we decided by default that we’d like to toil in the fields all day and die of disease after eating thin porridge. We have the ability to study and understand how our choices to use technology affect us, and to make sure algorithms aren’t the next agriculture. We should use it.

How Automation Leads to Job Losses, and How It Doesn't

When people think about automation, they write a scary story in their heads. A business hums along, employing lots of people. A machine capability is developed that does what the employees do, for less money. Cost-cutting bosses lay off all their employees. It’s all very sudden and dramatic and confrontational -- the kind of thing that’s easy to get worked up about.

I think the real story is different. Machines don’t replace humans. Machine-dependent businesses replace human-dependent businesses. To the extent that automation causes job losses, that’s how.

Think about travel agents. If the ability to communicate instantly over the internet to hotels and transportation companies around the world had only been available to travel agents and not to travelers, it might have enhanced their jobs, rather than replacing them. They would have been able to use time saved on international phone calls on client service, either taking on more clients or doing more thinking and research on behalf of existing clients.

But instant communication with hotels in Hungary and bus companies in Beijing wasn’t just offered as a tool to travel agents; it formed the basis of Expedia, Orbitz, and Travelocity, examples of a new business model in the travel industry with low labor concentration that served travel agents’ erstwhile customers. By offering a more convenient and transparent user experience, they gobbled up travel agents’ market share. Employment of travel agents collapsed.

Retail is another example. Retail jobs have remained stagnant over the last few years as a direct result of information technology. But it’s not because JC Penney decided to close physical branches and fire workers and invest in e-commerce instead; it’s because legacy brands have been out-competed by Amazon and others that use information technology as the centerpiece of a new business model -- and employ far fewer people.

ATMs did completely substitute for the core task that tellers had been relied on to perform -- the routine dispensing of cash. But adding ATMs didn’t change banks’ structure and business model, and they didn’t lead to the emergence of new IT-based services that competed for banking customers. As James Bessen has written, although ATMs reduced the number of tellers needed per bank branch, this made it cheaper for banks to open new branches. Banks used this efficiency boost to expand, and since their business still required some tellers to handle special transactions and serve in customer engagement roles, overall teller employment has increased as ATMs have been adopted.

Automation is confusing. There are so many contradictory theories and examples demonstrating why robots will or won’t take all of our jobs. At different times I’ve felt myself leaning in either direction. There’s really no safer conclusion than to throw up one’s hands and say, “only time will tell.”

But this theory of automation might be consistently applicable across the economy: when thinking about whether a new machine capability will lead to job losses, the critical variable is whether the new capability is introduced as a solution to enhance the efficiency of existing business models, or used as the foundation of entirely new business models that employ fewer people than their legacy competitors. From my research so far, in the former case, jobs are usually enhanced more than they are replaced. But in the latter case, job losses ensue. ATMs didn’t undermine the business of banks. But travel booking sites did undermine the entire business model of travel agents. Savings garnered from ATMs went to banks, so they could reinvest and expand. But savings garnered from travel sites went entirely to consumers.
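If it helps, here’s the rule of thumb reduced to a toy Python sketch. To be clear, this is just my hypothesis restated in code, not a rigorous model; the labels and the function are made up for illustration, and the hard part in real life is judging which category a new capability actually falls into.

```python
# A toy restatement of the hypothesis above. The category labels are
# hypothetical; deciding which one applies is the real analytical work.

def likely_effect_on_jobs(capability_use: str) -> str:
    """Guess the employment effect of a new machine capability."""
    if capability_use == "enhances_existing_model":
        # Savings stay with the incumbent, which can reinvest and
        # expand (the ATM story) -- jobs tend to be enhanced.
        return "jobs enhanced or stable"
    elif capability_use == "founds_new_model":
        # A leaner, machine-dependent competitor replaces the
        # human-dependent business (the Expedia story) -- job losses.
        return "job losses likely"
    raise ValueError("unknown capability use")

print(likely_effect_on_jobs("enhances_existing_model"))  # ATMs in banks
print(likely_effect_on_jobs("founds_new_model"))         # booking sites
```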

I don’t think this means that we’re doomed to staggeringly high unemployment. As Ezra Klein has written, humans are really good at reinventing our idea of economically valuable labor. Even as retail jobs have remained stagnant and hundreds of thousands of travel agents have had to find other work, unemployment has remained low. Hundreds of thousands of jobs are created in industries like home care, whose core tasks will not be performable by machines for a long time.

But automation can still threaten widespread prosperity. When travel agents started disappearing, the businesses that replaced them didn’t employ nearly as many people. Most of the value lost by travel agents was transferred to consumers who traveled frequently. In the case of retail, value has transferred from retail workers to people who like to shop online. In both cases, the result of human-dependent businesses being replaced is modest gains for people who I hypothesize are pretty well off -- and wild gains for the very few people who build and maintain the businesses that did the replacing. New jobs that are created in other sectors either don’t pay well or require skills that few people have.

As more data gets produced, as more and different kinds of sensors are built to capture that data, as algorithms to analyze that data become more sophisticated, and as devices to act on what algorithms decide expand through our lives, we’re going to see algorithm-centric businesses replace human-centric ones in many, many more sectors of the economy. That’s what I’m worried about.

I wrote this theory off the top of my head after thinking about it this weekend. Meaning, there could very well be counter-examples to this hypothesis that prove me wrong, and because I’m busy, I haven’t had the time to think about what they might be. So if you can think of any cases where automation has caused unemployment WITHOUT the automating technology serving as the foundation of a new business model -- where robots have directly replaced humans within the same business model -- I’d love to hear about them.

Books, TV, and Podcasts from September 2017

Books

Superintelligence, by Nick Bostrom: Finally dusted this off after it sat on my mental “need to read this” shelf for a few years. I thought this book would include technical arguments for why humans may create better-than-human artificial intelligence in the near future; but it’s really more like a hyper-detailed thought experiment. If we were to build superintelligence, how could that go wrong for humanity, depending on how it was designed? How can we ensure that the goals of AI programs align with human values? Nick Bostrom takes us on an exhaustive exploration of these and other hypotheticals, in a way that is dense, zany, and brilliantly prescient. However long it takes to build superintelligence, and I think it will be a pretty long time, the people then will be really glad that Bostrom wrote this book now.


Podcasts

The Rewatchables: One of the most underrated aspects of continuing to pay up for a monthly Comcast package is the joy of stumbling upon old movies running on TNT, FX or another cable network on Friday and Saturday nights. Things you’ve seen a million times, that you usually wouldn't intentionally watch from start to finish, but really enjoy watching for half an hour or so. Well, now there’s a podcast designed to enhance and elongate that wonderful feeling. Bill Simmons and other Ringer writers go deep on Point Break, Silence of the Lambs, The Departed, and other movies that really never get old. 

TV

Curb Your Enthusiasm: This show is like maple sugar candy: decadent, cloying, something you can only have a little piece of at a time, but so, so good. Larry David’s grotesque behavioral contortions are matched only by how egregiously unreasonable everyone around him is. If you’ve been really busy and are struggling to switch into relaxation mode, the excruciating awkwardness of every interaction on this show will seize your attention in a way that most TV shows don’t. Really happy that new episodes are coming out tonight.

The Defiant Ones: If, like me, rapping along to Eminem, Dr. Dre, and 50 Cent was an indispensable part of not hating your life in middle school, this is a must-watch. Same if you’re a lot older than me and Fleetwood Mac, Bruce Springsteen, U2, Stevie Nicks, or Tom Petty are exciting figures for you. This is a raw, rhythmically edited visual documentation of the careers of Dre and his business partner Jimmy Iovine, who together were instrumental in the careers of all of these artists, and many others comprising something like 50% of global music culture since the late 1970s. Skip the second half of the last installment, which is basically a commercial for Beats by Dre.

Three Open-Ended Questions About the Privacy Endgame

I’m in the middle of writing a ponderous and poetic post about Masanobu Fukuoka and Yuval Noah Harari, and I thought I’d have it done by now, but it still doesn’t read quite right. So instead, I’m leaving you with an open-ended musing about what devices and software really do, and the long-term implications for privacy if we keep adopting new apps faster than we buy new clothes.

It seems to me that digital technologies are just tools of measurement and optimization. Facebook measures our social pleasure by tracking our likes and other activity on its network, and optimizes our interactions with friends to give us more of it. The Industrial Internet of Things allows manufacturers to measure data from every robot and sensor across an entire supply chain’s worth of production lines and change output levels in response to momentary fluctuations in demand, saving energy and increasing sales. Uber, when you think about it, is little more than mobile technology that measures and responds to people’s desire for a ride. And the hand hygiene compliance system at Stanford Children’s Hospital literally watches whether staff wash their hands properly through a network of cameras placed in every bathroom and hallway, and takes note if they do it wrong. The growth of software, its power, and our delight is just a function of how much information about our lives it’s been able to measure and analyze.
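To make the pattern concrete, here’s a minimal sketch in Python of the measure-and-optimize loop I’m describing. Everything here is hypothetical and absurdly simplified -- real systems involve armies of engineers and far more sophisticated algorithms -- but the shape of the loop is the point: sense something about the world, compute an adjustment, act, repeat.

```python
# A made-up, bare-bones "measure and optimize" loop. All names and
# numbers here are illustrative, not any real company's system.

def measure(world):
    """Collect a signal: likes, sensor readings, ride requests..."""
    return world.get("signal", 0.0)

def optimize(signal, target):
    """Compute an adjustment that nudges the signal toward a target."""
    return target - signal  # the simplest possible controller

def act(world, adjustment):
    """Apply the adjustment back to the world, imperfectly."""
    world["signal"] = world.get("signal", 0.0) + 0.5 * adjustment
    return world

world = {"signal": 2.0}            # e.g., minutes spent in the app
for _ in range(10):                # in practice, the loop never stops
    s = measure(world)
    world = act(world, optimize(s, target=10.0))

print(round(world["signal"], 2))   # creeps steadily toward the target
```

The more of our lives the `measure` step can see, the better the loop works -- which is exactly why every one of these products keeps asking to see more.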

I think Uber was the coolest thing to be invented since the iPod, I think the Industrial IoT is a great idea for reducing energy consumption and promoting economic growth, and while I have a mixed relationship with Facebook, I do think it’s useful. But the logical conclusion of developing devices that measure more and more information, paired with algorithms that have increasing power to act on it -- which is all that’s happened in the last 20 years of the digital explosion -- is a world of perfect data transparency. It’s right there in Google’s and Facebook’s official mission statements: to “organize the world's information and make it universally accessible and useful” and to “make the world more open and connected,” respectively (though Facebook made minor updates to that mission statement recently).

This might sound a lot like the plot synopsis of The Circle, which I read was a really shitty movie, so I’m sorry about that. But I struggle with the tension between excitement about how wonderful all this measurement and optimization is, and dismay about how creepy and invasive it will have to become, if it hasn’t already. To me, Stanford’s hand hygiene application is uncomfortably big-brothery.

And I wonder: will there come a time where we are offered a new application of information technology that promises another leap forward in the safety, convenience, and efficiency of some aspect of our lives, but decide that it’s not desirable because of the sacrifice to privacy it will require? Or will privacy fade away as a desired value? How, even theoretically, does the idea of privacy continue to exist if developments in machine learning and the Internet of Things proceed over the next 50 years as they have in the last 20?

I’ve been reading a ton about automation, but I feel like a good book on the theory of privacy in an algorithmic world is in order. Would love to hear suggestions.