AI’s emissions are about to skyrocket even further

It’s no secret that the current AI boom is using up immense amounts of energy. Now we have a better idea of how much. 

A new paper from a team at the Harvard T.H. Chan School of Public Health examined 78% of all data centers in the US. These facilities—essentially buildings filled to the brim with rows of servers—are where AI models get trained, and they also get “pinged” every time we send a request through models like ChatGPT. They require huge amounts of energy both to power the servers and to keep them cool.

Since 2018, carbon emissions from data centers in the US have tripled. It’s difficult to put a number on how much AI in particular is responsible for this surge. But AI’s share is certainly growing rapidly as nearly every segment of the economy attempts to adopt the technology.

Read the full story.


Google’s big week was a flex for the power of big tech

Google has been speeding toward the holiday by shipping or announcing a flurry of products and updates. The combination of stuff here is pretty monumental, not just for a single company, but I think because it speaks to the power of the technology industry—even if it does trigger a personal desire that we could do more to harness that power and put it to more noble uses. Read more here.

This story originally appeared in The Debrief with Mat Honan, our weekly take on what’s really going on behind the biggest tech headlines. The story is subscriber-only, so nab a subscription if you haven’t already! Or you can sign up to the newsletter for free to get the next edition in your inbox on Friday.


The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Mysterious drones have been spotted along the US east coast

People are getting a bit freaked out, to say the least. (BBC)

  • Although sometimes they’re just small planes, authorities say. (Wired)
  • Trump says they should be shot down. (Politico)

2 TikTok could be gone from app stores by January 19

Last week, a US appeals court upheld a law forcing ByteDance to divest. (Reuters)

  • The rationale behind the ban could open the door to other regulations that suppress speech. (Atlantic)
  • Influencers are putting together their post-TikTok plans. (Business Insider)
  • The long-shot plan to save TikTok. (Verge)
  • The depressing truth about the coming ban. (MIT Technology Review)

3 Authorities in Serbia are using phone-cracking tools to install spyware

Activists and journalists found their phones had been tampered with after run-ins with police. (404 Media)

4 Cellphone videos are fueling violence inside US schools

Students are using phones to arrange, provoke and capture brawls in the corridors. (NYT)

5 AI search startup Perplexity says it will generate $10.5 million a month next year

It’s in talks to raise money at a $9 billion valuation. (The Information)

6 How Musk’s partnership with Trump could influence science

Even if he can’t cut as much as he’d like, he still stands to make big changes. (Nature)

7 AI firms will scour the globe looking for cheap energy

Low-cost power is an absolute priority. (Wired)

  • It’s an insatiably hungry industry. (Bloomberg)

8 Anthropic’s Claude is winning the chatbot battle for tech insiders

It’s not as big as ChatGPT, but it’s got a special something that people like. (NYT)

  • A new Character.ai chatbot for teens will no longer talk romance. (Verge)
  • How to trust what a chatbot says. (MIT Technology Review)

9 The reaction to the UnitedHealthcare CEO’s murder could prompt a reckoning

Healthcare’s algorithmic decision-making turns us into numbers on a spreadsheet. (Vanity Fair)

  • Luigi Mangione has to mean something. (Atlantic)

10 How China’s satellite megaprojects are challenging Starlink

Between them, Qianfan, Guo Wang and Honghu-3 could have as many satellites as Starlink. (CNBC)


Quote of the day

“We’ve achieved peak data and there’ll be no more.”

OpenAI’s cofounder and former chief scientist, Ilya Sutskever, tells the NeurIPS conference that the way AI models are trained will have to change.


The big story

How to stop a state from sinking

April 2024

In a 10-month span between 2020 and 2021, southwest Louisiana saw five climate-related disasters, including two destructive hurricanes. As if that wasn’t bad enough, more storms are coming, and many areas are not prepared.

But some government officials and state engineers are hoping there is an alternative: elevation. The $6.8 billion Southwest Coastal Louisiana Project is betting that raising residences by a few feet, coupled with extensive work to restore coastal boundary lands, will keep Louisianans in their communities.

Ultimately, it’s something of a last-ditch effort to preserve this slice of coastline, even as some locals pick up and move inland and as formal plans for managed retreat become more popular in climate-vulnerable areas across the country and the rest of the world. Read the full story.

—Xander Peters


We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ How to make the most of your jigsaw puzzles—try them on “hard mode.”
Mr Tickle is a maniac who needs to be stopped. 🧩

+ A song about Christmas that probably many of us can relate to, if we’re honest.
+ If the original Home Alone was wince-inducing in terms of injuries, the sequel is even more excruciating.
+ The best crispy roast potatoes ever? I’ll let you be the judge.

The Debrief

Last week, this space was all about OpenAI’s 12 days of shipmas. This week, the spotlight is on Google, which has been speeding toward the holiday by shipping or announcing its own flurry of products and updates. The combination of stuff here is pretty monumental, not just for a single company, but I think because it speaks to the power of the technology industry—even if it does trigger a personal desire that we could do more to harness that power and put it to more noble uses.

To start, last week Google introduced Veo, a new video generation model, and Imagen 3, a new version of its image generation model.

Then on Monday, Google announced a breakthrough in quantum computing with its Willow chip. The company claims the new machine is capable of a “standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years.” You may recall that MIT Technology Review covered some of the Willow work after researchers posted a paper preprint in August. But this week marked the big media splash. It was a stunning update that had Silicon Valley abuzz. (Seriously, I have never gotten so many quantum computing pitches as in the past few days.)

Google followed this on Wednesday with even more gifts: a Gemini 2 release, a Project Astra update, and news about forthcoming agents, including Mariner, which can browse the web, and Jules, a coding assistant.

First: Gemini 2. It’s impressive, with a lot of performance updates. But frankly, I have grown a little inured to language-model performance updates, to the point of apathy. Or at least near-apathy. I want to see them do something.

So for me, the cooler update was second on the list: Project Astra, which comes across like an AI from a futuristic movie set. Google first showed a demo of Astra back in May at its developer conference, and it was the talk of the show. But, since demos offer companies chances to show off products at their most polished, it can be hard to tell what’s real and what’s just staged for the audience. Still, when my colleague Will Douglas Heaven recently got to try it out himself, live and unscripted, it largely lived up to the hype. Although he found it glitchy, he noted that those glitches can be easily corrected. He called the experience “stunning” and said it could be generative AI’s killer app.

On top of all this, Will notes that this week Demis Hassabis, CEO of Google DeepMind (the company’s AI division), was in Sweden to receive his Nobel Prize. And what did you do with your week?

Making all this even more impressive, the advances represented in Willow, Gemini, Astra, and Veo are ones that just a few years ago many, many people would have said were not possible—or at least not in this timeframe. 

A popular knock on the tech industry is that it has a tendency to over-promise and under-deliver. The phone in your pocket gives the lie to this. So too do the rides I took in Waymo’s self-driving cars this week. (Both of which arrived faster than Uber’s estimated wait time. And honestly it’s not been that long since the mere ability to summon an Uber was cool!) And while quantum has a long way to go, the Willow announcement seems like an exceptional advance; if not a tipping point exactly, then at least a real waypoint on a long road. (For what it’s worth, I’m still not totally sold on chatbots. They do offer novel ways of interacting with computers, and have revolutionized information retrieval. But whether they are beneficial for humanity—especially given energy debts, the use of copyrighted material in their training data, their perhaps insurmountable tendency to hallucinate, etc.—is debatable, and certainly is being debated. But I’m pretty floored by this week’s announcements from Google, as well as OpenAI—full stop.)

And for all the necessary and overdue talk about reining in the power of Big Tech, the ability to hit significant new milestones on so many different fronts all at once is something that only a company with the resources of a Google (or Apple or Microsoft or Amazon or Meta or Baidu or whichever other behemoth) can do. 

All this said, I don’t want us to buy more gadgets or spend more time looking at our screens. I don’t want us to become more isolated physically, socializing with others only via our electronic devices. I don’t want us to fill the air with carbon or our soil with e-waste. I do not think these things should be the price we pay to drive progress forward. It’s indisputable that humanity would be better served if more of the tech industry was focused on ending poverty and hunger and disease and war.

Yet every once in a while, in the ever-rising tide of hype and nonsense that pumps out of Silicon Valley, epitomized by the AI gold rush of the past couple of years, there are moments that make me sit back in awe and amazement at what people can achieve, and in which I become hopeful about our ability to actually solve our larger problems—if only because we can solve so many other dumber, but incredibly complicated ones. This week was one of those times for me. 


Now read the rest of The Debrief

The News

• Robotaxi adoption is hitting a tipping point.

• But also, GM is shutting down its Cruise robotaxi division.

• Here’s how to use OpenAI’s new video editing tool Sora.

• Bluesky has an impersonator problem.

• The AI hype machine is coming under government scrutiny.


The Chat

Every week, I talk to one of MIT Technology Review’s journalists to go behind the scenes of a story they are working on. This week, I hit up James O’Donnell, who covers AI and hardware, about his story on how the startup defense contractor Anduril is bringing AI to the battlefield.

Mat: James, you got a pretty up close look at something most people probably haven’t even thought about yet, which is how the future of AI-assisted warfare might look. What did you learn on that trip that you think will surprise people?

James: Two things stand out. One, I think people would be surprised by the gulf between how technology has developed over the last 15 years for consumers versus the military. For consumers, we’ve gotten phones, computers, smart TVs and other technologies that generally do a pretty good job of talking to each other and sharing our data, even though they’re made by dozens of different manufacturers. It’s called the “internet of things.” In the military, technology has developed in exactly the opposite way, and it’s putting them in a crisis. They have stealth aircraft all over the world, but communicating about a drone threat might be done with PowerPoints and a chat service reminiscent of AOL Instant Messenger.

The second is just how much the Pentagon is now looking to AI to change all of this. New initiatives have surged in the current AI boom. They are spending on training new AI models to better detect threats, autonomous fighter jets, and intelligence platforms that use AI to find pertinent information. What I saw at Anduril’s test site in California is also a key piece of that: using AI to connect to and control lots of different pieces of hardware, like drones and cameras and submarines, from a single platform. The amount being invested in AI is much smaller than for aircraft carriers and jets, but it’s growing.

Mat: I was talking with a different startup defense contractor recently, who was telling me about the difficulty of getting all these increasingly autonomous devices on the battlefield talking to each other in a coordinated way. Like Anduril, he was making the case that this has to be done at the edge, and that there is too much happening for human decision-making to process. Do you think that’s true? Why is that?

James: So many in the defense space have pointed to the war in Ukraine as a sign that warfare is changing. Drones are cheaper and more capable than they ever were in the wars in the Middle East. It’s why the Pentagon is spending $1 billion on the Replicator initiative to field thousands of cheap drones by 2025. It’s also looking to field more underwater drones as it plans for scenarios in which China may invade Taiwan.

Once you get these systems, though, the problem is having all the devices communicate with one another securely. You need to play Air Traffic Control at the same time that you’re pulling in satellite imagery and intelligence information, all in environments where communication links are vulnerable to attacks.

Mat: I guess I still have a mental image of a control room somewhere, like you might see in Dr. Strangelove or War Games (or Star Wars for that matter) with a handful of humans directing things. Are those days over?

James: I think a couple things will change. One, a single person in that control room will be responsible for a lot more than they are now. Rather than running just one camera or drone system manually, they’ll command software that does it for them, for lots of different devices. The idea that the defense tech sector is pushing is to take them out of the mundane tasks—rotating a camera around to look for threats—and instead put them in the driver’s seat for decisions that only humans, not machines, can make.

Mat: I know that critics of the industry push back on the idea of AI being empowered to make battlefield decisions, particularly when it comes to life and death, but it seems to me that we are increasingly creeping toward that and it seems perhaps inevitable. What’s your sense?

James: This is painting with broad strokes, but I think the debates about military AI fall along similar lines to what we see for autonomous vehicles. You have proponents saying that driving is not a thing humans are particularly good at, and when they make mistakes, it takes lives. Others might agree conceptually, but debate at what point it’s appropriate to fully adopt fallible self-driving technology in the real world. How much better does it have to be than humans?

In the military, the stakes are higher. There’s no question that AI is increasingly being used to sort through and surface information to decision-makers. It’s finding patterns in data, translating information, and identifying possible threats. Proponents are outspoken that this will make warfare more precise and reduce casualties. What critics are concerned about is how far across that decision-making pipeline AI is going, and how much human oversight there is.

I think where it leaves me is wanting transparency. When AI systems make mistakes, just like when human military commanders make mistakes, I think we deserve to know, and that transparency does not have to compromise national security. It took years for reporter Azmat Khan to piece together the mistakes made during drone strikes in the Middle East, because agencies were not forthcoming. That obfuscation absolutely cannot be the norm as we enter the age of military AI.

Mat: Finally, did you have a chance to hit an In-N-Out burger while you were in California?

James: Normally In-N-Out is a requisite stop for me in California, but ahead of my trip I heard lots of good things about the burgers at The Apple Pan in West LA, so I went there. To be honest, the fries were better, but for the burger I have to hand it to In-N-Out.


The Recommendation

A few weeks ago I suggested Ca7riel and Paco Amoroso’s appearance on NPR’s Tiny Desk. At the risk of this space becoming a Tiny Desk stan account, I’m back again with another. I was completely floored by Doechii’s Tiny Desk appearance last week. It’s so full of talent and joy and style and power. I came away completely inspired and have basically had her music on repeat in Spotify ever since. If you are already a fan of her recorded music, you will love her live. If she’s new to you, well, you’re welcome. Go check it out. Oh, and don’t worry: I’m not planning to recommend Billie Eilish’s new Tiny Desk concert in next week’s newsletter. Mostly because I’m doing so now.
