Three Open-Ended Questions About the Privacy Endgame

I’m in the middle of writing a ponderous and poetic post about Masanobu Fukuoka and Yuval Noah Harari, and I thought I’d have it done by now, but it still doesn’t read quite right. So instead, I’m leaving you with an open-ended musing about what devices and software really do, and the long-term implications for privacy if we keep adopting new apps faster than we buy new clothes.

It seems to me that digital technologies are just tools of measurement and optimization. Facebook measures our social pleasure by tracking our likes and other activity on its network, and optimizes our interactions with friends to give us more of it. The Industrial Internet of Things lets manufacturers measure data from every robot and sensor across an entire supply chain’s worth of production lines and change output levels in response to momentary fluctuations in demand, saving energy and increasing sales. Uber, when you think about it, is little more than mobile technology that measures and responds to people’s desire for a ride. And the hand hygiene compliance system at Stanford Children’s Hospital literally watches whether staff wash their hands properly, through a network of cameras placed in every bathroom and hallway, and takes note if they do it wrong. The growth of software’s power, and of our delight in it, is just a function of how much information about our lives it has been able to measure and analyze.

I think Uber is the coolest thing invented since the iPod, I think the Industrial IoT is a great idea for reducing energy consumption and promoting economic growth, and while I have a mixed relationship with Facebook, I do think it’s useful. But the logical conclusion of building devices that measure more and more information, with algorithms that have increasing power to act on it – which is all that’s happened in the last 20 years of the digital explosion – is a world of perfect data transparency. It’s right there in Google’s and Facebook’s official mission statements: to “organize the world’s information and make it universally accessible and useful” and to “make the world more open and connected,” respectively (though Facebook updated that mission statement recently).

This might sound a lot like the plot synopsis of The Circle, which I read was a really shitty movie, so I’m sorry about that; but I struggle with the tension between excitement about how wonderful all this measurement and optimization is, and dismay about how creepy and invasive it will have to become, if it hasn’t already. To me, Stanford's hand hygiene application is uncomfortably big-brothery.

And I wonder: will there come a time when we are offered a new application of information technology that promises another leap forward in the safety, convenience, and efficiency of some aspect of our lives, but decide that it’s not desirable because of the sacrifice of privacy it would require? Or will privacy fade away as a desired value? How, even theoretically, does the idea of privacy continue to exist if developments in machine learning and the Internet of Things proceed over the next 50 years as they have in the last 20?

I’ve been reading a ton about automation, but I feel like a good book on the theory of privacy in an algorithmic world is in order. I’d love to hear suggestions.