This piece is from MIT Technology Review's forthcoming mortality-themed issue, available from 26 October.
On a recent evening, I sat at home scrolling through my Twitter feed, which—since I’m a philosopher who studies AI and data—is always filled with the latest tech news. After a while, I noticed a heaviness growing in the pit of my stomach, that telltale sign that you are not having a good time. But why? I wasn’t reading news about politics, or the climate crisis, or the pandemic—the usual sources of doomscrolling ennui. I stopped and reflected for a moment. What had I just been looking at?
I had blinked at the aesthetic poverty of the most recent pitch for Meta’s Horizon Worlds VR game, featuring Mark Zuckerberg’s dead-eyed cartoon avatar against a visual background that one Twitter wag charitably compared to “the painted walls of an abandoned day-care center.” I had let out a quiet sigh at the news of Ring Nation, an Amazon-produced TV show featuring “lighthearted viral content” captured from the Ring surveillance empire. I had clenched my jaw at a screenshot of the Stable Diffusion text-to-image model offering up AI artworks in the styles of dozens of unpaid human artists, whose collective labor had been poured into the model’s training data, ground up, and spit back out.
I recognized the feeling and I knew its name. It was resignation—that feeling of being stuck in a place you don’t want to be but can’t leave. I was struck by the irony: I had studied technology my whole life in part to avoid this kind of feeling. Tech used to be my happy place.
Naturally, I poured my emotion into a tweetstorm.
I struck a nerve. As my notifications started blowing up and thousands of replies and retweets started pouring in, the initial dopamine reward for virality gave way to a deeper sadness. A lot of people were sitting with that same heavy feeling in their stomachs.
Still, there was catharsis in reading so many others give voice to it.
Something is missing from our lives, and from our technology. Its absence is feeding a growing unease being voiced by many who work in tech or study it. It’s what drives the new generation of PhD and postdoctoral researchers I work with at the University of Edinburgh, who are drawing together knowledge from across the technical arts, sciences, and humanistic disciplines to try to figure out what’s gone awry with our tech ecosystem and how to fix it. To do that, we have to understand how and why the priorities in that ecosystem have changed.
The goal of consumer tech development used to be pretty simple: design and build something of value to people, giving them a reason to buy it. A new refrigerator is shiny, cuts down on my energy bills, makes cool-looking ice cubes. So I buy it. Done. A Roomba promises to vacuum the cat hair from under my sofa while I take a nap. Sold! But this vision of tech is increasingly outdated. It’s not enough for a refrigerator to keep food cold; today’s version offers cameras and sensors that can monitor how and what I’m eating, while the Roomba can now send a map of my house to Amazon.
The issue here goes far beyond the obvious privacy risks. It’s a sea change in the entire model for innovation and the incentives that drive it. Why settle for a single profit-taking transaction for the company when you can instead design a product that will extract a monetizable data stream from every buyer, returning revenue to the company for years? Once you’ve captured that data stream, you’ll protect it, even to the disadvantage of your customer. After all, if you buy up enough of the market, you can well afford to endure your customers’ anger and frustration. Just ask Mark Zuckerberg.
It’s not just consumer tech and social media platforms that have made this shift. The large ag-tech brand John Deere, for example, formerly beloved by its customers, is fighting a “right to repair” movement driven by farmers angry at being forbidden to fix their own machines, lest they disturb the proprietary software sending high-value data on the farmers’ land and crops back to the manufacturer. As more than one commenter on my Twitter thread noted, today in tech we are the product, not the prime beneficiary. The mechanical devices that used to be the product are increasingly just the middlemen.
There’s also a shift in who tech innovations today are for. Several respondents objected to my thread by drawing attention to today’s vibrant market in new tech for “geeks” and “nerds”—Raspberry Pis, open-source software tools, programmable robots. As great as many of these are for those with the time, skills, and interest to put them to use, they are tools made for a narrow audience. The thrill of seeing genuine innovation in biomedical technology, such as mRNA vaccines, is likewise dampened when we see the benefits concentrated in the wealthiest countries—the ones already best served by tech.
Of course, new technology remains a source of joy and excitement in many places that have historically been denied an equitable share of its comforts. But innovation used to promise us much more than new devices and apps. Engineering and inventing were once professions primarily oriented toward creating more livable infrastructure, rather than disposable stuff.
Vital technologies like roads, power grids, sewers, and transit systems used to be a central part of the engineering enterprise in the US. Nowadays, we treat them as taxpayer burdens, and our best minds and resources are funneled instead into data-hungry consumer devices and apps. If the US is any indicator of the trajectory of global technology development, then deep trouble lies ahead for us all, because we have clearly lost the plot.
The fact is, the visible focus of tech culture is no longer on expanding the frontiers of humane innovation—innovation that serves us all. Even space travel has lost its humanistic vision; today’s frontier is luxury space tourism and billionaires selling credulous investors on fantasies of escape to Mars. With 8 billion people teetering on the precipice of global environmental destruction, we can’t afford a world where the core mission of new tech appears to be “Take the money and run.”
If we continue to turn away from humane applications of tech, we risk feeding a runaway feedback loop that drains our collective will to reinvest in their expansion. The danger is not only that today’s technology fails to be directed to our most urgent civilizational needs. It’s that technologists’ apparent loss of interest in humane innovation is depleting our collective faith in our own powers of invention.
When it stays true to its deepest roots, technology is still driven by a moral impulse: the impulse to construct places, tools, and techniques that can help humans not only survive but flourish together. Of course, that impulse is easily joined to, or pushed aside by, others: the impulses to dominate, exterminate, immiserate, surveil, and control.
But those darker motivations aren’t at the heart of our technological capacity as a species. And we can’t let them define the modern technological order. Because if technology loses its association with shared joy and comfort, we risk becoming alienated from one of the most fundamental ways we care for the world and one another.
Shannon Vallor is the Baillie Gifford Professor of Ethics of Data and Artificial Intelligence at the University of Edinburgh and director of the Centre for Technomoral Futures in the Edinburgh Futures Institute.