It’s no secret that the current AI boom is using up immense amounts of energy. Now we have a better idea of how much. 

A new paper, from a team at the Harvard T.H. Chan School of Public Health, examined 2,132 data centers operating in the United States (78% of all facilities in the country). These facilities—essentially buildings filled to the brim with rows of servers—are where AI models get trained, and they also get “pinged” every time we send a request through models like ChatGPT. They require huge amounts of energy both to power the servers and to keep them cool. 

Since 2018, carbon emissions from data centers in the US have tripled. For the 12 months ending August 2024, data centers were responsible for 105 million metric tons of CO2, accounting for 2.18% of national emissions (for comparison, domestic commercial airlines are responsible for about 131 million metric tons). About 4.59% of all the energy used in the US goes toward data centers, a figure that’s doubled since 2018.

It’s difficult to put a number on how much AI in particular, which has been booming since ChatGPT launched in November 2022, is responsible for this surge. That’s because data centers process lots of different types of data—in addition to training or pinging AI models, they do everything from hosting websites to storing your photos in the cloud. However, the researchers say, AI’s share is certainly growing rapidly as nearly every segment of the economy attempts to adopt the technology.

“It’s a pretty big surge,” says Eric Gimon, a senior fellow at the think tank Energy Innovation, who was not involved in the research. “There’s a lot of breathless analysis about how quickly this exponential growth could go. But it’s still early days for the business in terms of figuring out efficiencies, or different kinds of chips.”

Notably, the sources for all this power are particularly “dirty.” Since so many data centers are located in coal-producing regions, like Virginia, the “carbon intensity” of the energy they use is 48% higher than the national average. The paper, which was published on arXiv and has not yet been peer-reviewed, found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average. 

There are causes other than simply being located in coal country, says Falco Bargagli-Stoffi, an author of the paper. “Dirtier energy is available throughout the entire day,” he says, and plenty of data centers require that to maintain peak operation 24-7. “Renewable energy, like wind or solar, might not be as available.” Political or tax incentives, and local pushback, can also affect where data centers get built.  

One key shift in AI right now means that the field’s emissions are soon likely to skyrocket. AI models are rapidly moving from fairly simple text generators like ChatGPT toward highly complex image, video, and music generators. Until now, many of these “multimodal” models have been stuck in the research phase, but that’s changing. 

OpenAI released its video generation model Sora to the public on December 9, and its website has been so flooded with traffic from people eager to test it out that it is still not functioning properly. Competing models, like Veo from Google and Movie Gen from Meta, have still not been released publicly, but if those companies follow OpenAI’s lead as they have in the past, they might be soon. Music generation models from Suno and Udio are growing (despite lawsuits), and Nvidia released its own audio generator last month. Google is working on its Astra project, which will be a video-AI companion that can converse with you about your surroundings in real time. 

“As we scale up to images and video, the data sizes increase exponentially,” says Gianluca Guidi, a PhD student in artificial intelligence at the University of Pisa and IMT Lucca, who is the paper’s lead author. Combine that with wider adoption, he says, and emissions will soon jump.

One of the goals of the researchers was to build a more reliable way to get snapshots of just how much energy data centers are using. That’s been a more complicated task than you might expect, given that the data is dispersed across a number of sources and agencies. They’ve now built a portal that shows data center emissions across the country. The long-term goal of the data pipeline is to inform future regulatory efforts to curb emissions from data centers, which are predicted to grow enormously in the coming years. 

“There’s going to be increased pressure, between the environmental and sustainability-conscious community and Big Tech,” says Francesca Dominici, director of the Harvard Data Science Initiative and another coauthor. “But my prediction is that there is not going to be regulation. Not in the next four years.”


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How Silicon Valley is disrupting democracy

The internet loves a good neologism, especially if it can capture a purported vibe shift or explain a new trend. In 2013, the columnist Adrian Wooldridge coined a word that eventually did both. Writing for the Economist, he warned of the coming “techlash,” a revolt against Silicon Valley’s rich and powerful fueled by the public’s growing realization that these “sovereigns of cyberspace” weren’t the benevolent bright-future bringers they claimed to be.

While Wooldridge didn’t say precisely when this techlash would arrive, it’s clear today that a dramatic shift in public opinion toward Big Tech and its leaders did in fact happen—and is arguably still happening.

Two new books serve as excellent reminders of why it started in the first place. Together, they chronicle the rise of an industry that is increasingly using its unprecedented wealth and power to undermine democracy, and they outline what we can do to start taking some of that power back. Read the full story.

—Bryan Gardiner

This story is from the forthcoming magazine edition of MIT Technology Review, set to go live on January 6—it’s all about the exciting breakthroughs happening in the world right now. If you don’t already subscribe, sign up to receive a copy.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Google has unveiled a new headset and smart glasses OS
Android XR gives wearers hands-free control thanks to the firm’s Gemini chatbot. (The Verge)
+ It also revealed a new Samsung-built headset called Project Moohan. (WP $)
+ Google’s hoping to learn from mistakes it made with Google Glass a decade ago. (Wired $)
+ Its new Project Astra could be generative AI’s killer app. (MIT Technology Review)

2 The US and UK are on an AI regulation collision course
Donald Trump’s approach to policing AI is in stark contrast to what the UK is planning. (FT $)
+ The new US FTC chair favors a light regulatory touch. (Reuters)
+ How’s AI self-regulation going? (MIT Technology Review)

3 We don’t quite know what’s causing a global temperature spike
But scientists agree that we should be worried. (New Yorker $)
+ The average global temperature could drop slightly next year, though. (New Scientist $)
+ Who’s to blame for climate change? It’s surprisingly complicated. (MIT Technology Review)

4 Trump’s administration is filling up with tech insiders
More venture capitalists and officials are likely to join their ranks. (The Information $)
+ These crypto kingpins will be keeping a close eye on proceedings. (FT $)

5 What happened after West Virginia revoked access to obesity drugs
Teachers and state workers struggled after a pilot drugs program was deemed too expensive. (The Atlantic $)
+ Weight-loss injections have taken over the internet. But what does this mean for people IRL? (MIT Technology Review)

6 Would you buy a car from Amazon?
The e-retail giant wants you to sidestep the dealership and purchase from it directly. (Wired $)
+ While it’s limited to Hyundai models, other manufacturers will follow. (Forbes $)

7 Silicon Valley’s perks culture is largely dead
No more free massages or artisanal chocolate, sob. (NYT $)

8 AI is teaching us more about the Berlin Wall’s murals
From the kinds of paint used, to application techniques. (Ars Technica)

9 For $69, you can invest in a rare stegosaurus skeleton
The fossil is a pretty extreme example of an alternative investment. (Fast Company $)
+ New Yorkers can swing by the American Museum of Natural History to see it. (AP News)

10 This New Jersey politician faked his Spotify Wrapped
To hide his children’s results and make himself appear a bigger Bruce Springsteen fan. (Billboard $)
+ What would The Boss himself make of the controversy? (WP $)

Quote of the day

“It could be far worse than any challenge we’ve previously encountered — and far beyond our capacity to mitigate.”

—Jack Szostak, a professor in the University of Chicago’s chemistry department, tells the Financial Times about the unprecedented danger posed by synthetic bacteria.

The big story

A brief, weird history of brainwashing

April 2024

On a spring day in 1959, war correspondent Edward Hunter testified before a US Senate subcommittee investigating “the effect of Red China Communes on the United States.”

Hunter introduced them to a supposedly scientific system for changing people’s minds, even making them love things they once hated.

Much of it was baseless, but Hunter’s sensational tales still became an important part of the disinformation that fueled a “mind-control race,” with the US government pumping millions of dollars into research on brain manipulation during the Cold War.

But while the science never exactly panned out, residual beliefs fostered by this bizarre conflict continue to play a role in ideological and scientific debates to this day. Read the full story.

—Annalee Newitz

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Deep down in the depths of the Atacama Trench, a new crustacean has been discovered.
+ Living in this picturesque Antarctic settlement comes with a catch—you have to have your appendix removed before you can move in.
+ Just when you thought sweet potato couldn’t get any better, it turns out it makes pretty tasty macaroons.
+ If you’re looking to introduce kids to the joy of sci-fi, these movies are a great place to start.


The internet loves a good neologism, especially if it can capture a purported vibe shift or explain a new trend. In 2013, the columnist Adrian Wooldridge coined a word that eventually did both. Writing for the Economist, he warned of the coming “techlash,” a revolt against Silicon Valley’s rich and powerful fueled by the public’s growing realization that these “sovereigns of cyberspace” weren’t the benevolent bright-future bringers they claimed to be. 

While Wooldridge didn’t say precisely when this techlash would arrive, it’s clear today that a dramatic shift in public opinion toward Big Tech and its leaders did in fact happen—and is arguably still happening. Say what you will about the legions of Elon Musk acolytes on X, but if an industry and its executives can bring together the likes of Elizabeth Warren and Lindsey Graham in shared condemnation, it’s definitely not winning many popularity contests.

To be clear, there have always been critics of Silicon Valley’s very real excesses and abuses. But for the better part of the last two decades, many of those voices of dissent were either written off as hopeless Luddites and haters of progress or drowned out by a louder and far more numerous group of techno-optimists. Today, those same critics (along with many new ones) have entered the fray once more, rearmed with popular Substacks, media columns, and—increasingly—book deals.

Two of the more recent additions to the flourishing techlash genre—Rob Lalka’s The Venture Alchemists: How Big Tech Turned Profits into Power and Marietje Schaake’s The Tech Coup: How to Save Democracy from Silicon Valley—serve as excellent reminders of why it started in the first place. Together, the books chronicle the rise of an industry that is increasingly using its unprecedented wealth and power to undermine democracy, and they outline what we can do to start taking some of that power back.

Lalka is a business professor at Tulane University, and The Venture Alchemists focuses on how a small group of entrepreneurs managed to transmute a handful of novel ideas and big bets into unprecedented wealth and influence. While the names of these demigods of disruption will likely be familiar to anyone with an internet connection and a passing interest in Silicon Valley, Lalka also begins his book with a page featuring their nine (mostly) young, (mostly) smiling faces. 

There are photos of the famous founders Mark Zuckerberg, Larry Page, and Sergey Brin; the VC funders Keith Rabois, Peter Thiel, and David Sacks; and a more motley trio made up of the disgraced former Uber CEO Travis Kalanick, the ardent eugenicist and reputed father of Silicon Valley Bill Shockley (who, it should be noted, died in 1989), and a former VC and the future vice president of the United States, JD Vance.

To his credit, Lalka takes this medley of tech titans and uses their origin stories and interrelationships to explain how the so-called Silicon Valley mindset (mind virus?) became not just a fixture in California’s Santa Clara County but also the preeminent way of thinking about success and innovation across America.

This approach to doing business, usually cloaked in a barrage of cringey innovation-speak—disrupt or be disrupted, move fast and break things, better to ask for forgiveness than permission—can often mask a darker, more authoritarian ethos, according to Lalka. 

One of the nine entrepreneurs in the book, Peter Thiel, has written that “I no longer believe that freedom and democracy are compatible” and that “competition [in business] is for losers.” Many of the others think that all technological progress is inherently good and should be pursued at any cost and for its own sake. A few also believe that privacy is an antiquated concept—even an illusion—and that their companies should be free to hoard and profit off our personal data. Most of all, though, Lalka argues, these men believe that their newfound power should be unconstrained by governments, regulators, or anyone else who might have the gall to impose some limitations.

Where exactly did these beliefs come from? Lalka points to people like the late free-market economist Milton Friedman, who famously asserted that a company’s only social responsibility is to increase profits, as well as to Ayn Rand, the author, philosopher, and hero to misunderstood teenage boys everywhere who tried to turn selfishness into a virtue. 

The Venture Alchemists: How Big Tech Turned Profits into Power
Rob Lalka
COLUMBIA BUSINESS SCHOOL PUBLISHING, 2024

It’s a somewhat reductive and not altogether original explanation of Silicon Valley’s libertarian inclinations. What ultimately matters, though, is that many of these “values” were subsequently encoded into the DNA of the companies these men founded and funded—companies that today shape how we communicate with one another, how we share and consume news, and even how we think about our place in the world. 

The Venture Alchemists is strongest when it’s describing the early-stage antics and on-campus controversies that shaped these young entrepreneurs or, in many cases, simply reveal who they’ve always been. Lalka is a thorough and tenacious researcher, as the book’s 135 pages of endnotes suggest. And while nearly all these stories have been told before in other books and articles, he still manages to provide new perspectives and insights from sources like college newspapers and leaked documents. 

One thing the book is particularly effective at is deflating the myth that these entrepreneurs were somehow gifted seers of (and investors in) a future the rest of us simply couldn’t comprehend or predict. 

Sure, someone like Thiel made what turned out to be a savvy investment in Facebook early on, but he also made some very costly mistakes with that stake. As Lalka points out, Thiel’s Founders Fund dumped tens of millions of shares shortly after Facebook went public, and Thiel himself went from owning 2.5% of the company in 2012 to 0.000004% less than a decade later (around the same time Facebook hit its trillion-dollar valuation). Throw in his objectively terrible wagers in 2008, 2009, and beyond, when he effectively shorted what turned out to be one of the longest bull markets in world history, and you get the impression he’s less oracle and more ideologue who happened to take some big risks that paid off. 

One of Lalka’s favorite mantras throughout The Venture Alchemists is that “words matter.” Indeed, he uses a lot of these entrepreneurs’ own words to expose their hypocrisy, bullying, juvenile contrarianism, casual racism, and—yes—outright greed and self-interest. It is not a flattering picture, to say the least. 

Unfortunately, instead of simply letting those words and deeds speak for themselves, Lalka often feels the need to interject with his own, frequently enjoining readers against finger-pointing or judging these men too harshly even after he’s chronicled their many transgressions. Whether this is done to try to convey some sense of objectivity or simply to remind readers that these entrepreneurs are complex and complicated men making difficult decisions, it doesn’t work. At all.

For one thing, Lalka clearly has his own strong opinions about the behavior of these entrepreneurs—opinions he doesn’t try to disguise. At one point in the book he suggests that Kalanick’s alpha-male, dominance-at-any-cost approach to running Uber is “almost, but not quite” like rape, which is maybe not the comparison you’d make if you wanted to seem like an arbiter of impartiality. And if he truly wants readers to come to a different conclusion about these men, he certainly doesn’t provide many reasons for doing so. Simply telling us to “judge less, and discern more” seems worse than a cop-out. It comes across as “almost, but not quite” like victim-blaming—as if we’re somehow just as culpable as they are for using their platforms and buying into their self-mythologizing. 

“In many ways, Silicon Valley has become the antithesis of what its early pioneers set out to be.”

Marietje Schaake

Equally frustrating is the crescendo of empty platitudes that ends the book. “The technologies of the future must be pursued thoughtfully, ethically, and cautiously,” Lalka says after spending 313 pages showing readers how these entrepreneurs have willfully ignored all three adverbs. What they’ve built instead are massive wealth-creation machines that divide, distract, and spy on us. Maybe it’s just me, but that kind of behavior seems ripe not only for judgment, but also for action.

So what exactly do you do with a group of men seemingly incapable of serious self-reflection—men who believe unequivocally in their own greatness and who are comfortable making decisions on behalf of hundreds of millions of people who did not elect them, and who do not necessarily share their values?

You regulate them, of course. Or at least you regulate the companies they run and fund. In Marietje Schaake’s The Tech Coup, readers are presented with a road map for how such regulation might take shape, along with an eye-opening account of just how much power has already been ceded to these corporations over the past 20 years.

There are companies like NSO Group, whose powerful Pegasus spyware tool has been sold to autocrats, who have in turn used it to crack down on dissent and monitor their critics. Billionaires are now effectively making national security decisions on behalf of the United States and using their social media companies to push right-wing agitprop and conspiracy theories, as Musk does with his Starlink satellites and X. Ride-sharing companies use their own apps as propaganda tools and funnel hundreds of millions of dollars into ballot initiatives to undo laws they don’t like. The list goes on and on. According to Schaake, this outsize and largely unaccountable power is changing the fundamental ways that democracy works in the United States. 

“In many ways, Silicon Valley has become the antithesis of what its early pioneers set out to be: from dismissing government to literally taking on equivalent functions; from lauding freedom of speech to becoming curators and speech regulators; and from criticizing government overreach and abuse to accelerating it through spyware tools and opaque algorithms,” she writes.

Schaake, who’s a former member of the European Parliament and the current international policy director at Stanford University’s Cyber Policy Center, is in many ways the perfect chronicler of Big Tech’s power grab. Beyond her clear expertise in the realms of governance and technology, she’s also Dutch, which makes her immune to the distinctly American disease that seems to equate extreme wealth, and the power that comes with it, with virtue and intelligence. 

This resistance to the various reality-distortion fields emanating from Silicon Valley plays a pivotal role in her ability to see through the many justifications and self-serving solutions that come from tech leaders themselves. Schaake understands, for instance, that when someone like OpenAI’s Sam Altman gets in front of Congress and begs for AI regulation, what he’s really doing is asking Congress to create a kind of regulatory moat between his company and any other startups that might threaten it, not acting out of some genuine desire for accountability or governmental guardrails. 

The Tech Coup: How to Save Democracy from Silicon Valley
Marietje Schaake
PRINCETON UNIVERSITY PRESS, 2024

Like Shoshana Zuboff, the author of The Age of Surveillance Capitalism, Schaake believes that “the digital” should “live within democracy’s house”—that is, technologies should be developed within the framework of democracy, not the other way around. To accomplish this realignment, she offers a range of solutions, from banning what she sees as clearly antidemocratic technologies (like face-recognition software and other spyware tools) to creating independent teams of expert advisors to members of Congress (who are often clearly out of their depth when attempting to understand technologies and business models). 

Predictably, all this renewed interest in regulation has inspired its own backlash in recent years—a kind of “tech revanchism,” to borrow a phrase from the journalist James Hennessy. In addition to familiar attacks, such as trying to paint supporters of the techlash as somehow being antitechnology (they’re not), companies are also spending massive amounts of money to bolster their lobbying efforts. 

Some venture capitalists, like LinkedIn cofounder Reid Hoffman, who made big donations to the Kamala Harris presidential campaign, wanted to oust Federal Trade Commission chair Lina Khan, claiming that regulation is killing innovation (it isn’t) and removing the incentives to start a company (it’s not). And then of course there’s Musk, who now seems to be in a league of his own when it comes to how much influence he may exert over Donald Trump and the government that his companies have valuable contracts with.

What all these claims of victimization and subsequent efforts to buy their way out of regulatory oversight miss is that there’s actually a vast and fertile middle ground between simple techno-optimism and techno-skepticism. As the New Yorker contributor Cal Newport and others have noted, it’s entirely possible to support innovations that can significantly improve our lives without accepting that every popular invention is good or inevitable.

Regulating Big Tech will be a crucial part of leveling the playing field and ensuring that the basic duties of a democracy can be fulfilled. But as both Lalka and Schaake suggest, another battle may prove even more difficult and contentious. This one involves undoing the flawed logic and cynical, self-serving philosophies that have led us to the point where we are now. 

What if we admitted that constant bacchanals of disruption are in fact not all that good for our planet or our brains? What if, instead of “creative destruction,” we started fetishizing stability, and in lieu of putting “dents in the universe,” we refocused our efforts on fixing what’s already broken? What if—and hear me out—we admitted that technology might not be the solution to every problem we face as a society, and that while innovation and technological change can undoubtedly yield societal benefits, they don’t have to be the only measures of economic success and quality of life? 

When ideas like these start to sound less like radical concepts and more like common sense, we’ll know the techlash has finally achieved something truly revolutionary. 

Bryan Gardiner is a writer based in Oakland, California.
