This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Geoffrey Hinton, AI pioneer and figurehead of doomerism, wins Nobel Prize
Geoffrey Hinton, a computer scientist whose pioneering work on deep learning in the 1980s and 90s underpins all of the most powerful AI models in the world today, has been awarded the 2024 Nobel Prize for Physics by the Royal Swedish Academy of Sciences.
Hinton shares the award with fellow computer scientist John Hopfield, who invented a type of pattern-matching neural network that could store and reconstruct data (a toy sketch of the idea appears after this item). Hinton built on this technology, known as Hopfield networks, to develop the Boltzmann machine, and later helped popularize backpropagation, the algorithm that lets neural networks learn.
But since May 2023, when MIT Technology Review helped break the news that Hinton was now scared of the technology that he had helped bring about, the 76-year-old scientist has become much better known as a figurehead for doomerism—the mindset that takes seriously the risk that near-future AI could produce catastrophic results, up to and including human extinction. Read the full story.
—Will Douglas Heaven
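The "store and reconstruct" behavior credited to Hopfield is simple enough to sketch in a few lines. Below is a minimal, hedged illustration in Python (using numpy) of a classic binary Hopfield network with Hebbian weights: it stores one pattern and recovers it from a corrupted copy. The pattern, network size, and update schedule here are made up for illustration and are not drawn from the prize citation or the linked story.

```python
# Minimal sketch of a classic binary Hopfield network (Hebbian learning),
# illustrating how such a network can store a pattern and reconstruct it
# from a corrupted copy. The example pattern below is purely illustrative.
import numpy as np

def train(patterns):
    """Build a Hebbian weight matrix from +/-1 patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Repeatedly update the units until the state settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])   # one stored memory
w = train(stored)

noisy = stored[0].copy()
noisy[:2] *= -1                      # flip two units to corrupt the memory
print("recovered:", recall(w, noisy))
print("matches stored:", np.array_equal(recall(w, noisy), stored[0]))
```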
Forget chat. AI that can hear, see and click is already here
Chatting with an AI chatbot is so 2022. The latest hot AI toys take advantage of multimodal models, which can handle several types of data, such as images, audio, and text, at the same time.
Multimodal generative content has also become markedly better in a very short time, and the way we interact with AI systems is changing, becoming less reliant on text. What unites these features is a more interactive, customizable interface and the ability to apply AI tools to lots of different types of source material. But we’ve yet to see a killer app. Read the full story.
—Melissa Heikkilä
This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
Why artificial intelligence and clean energy need each other
—Michael Kearney is a general partner at Engine Ventures, a firm that invests in startups commercializing breakthrough science and engineering. Lisa Hansmann is a principal at Engine Ventures and previously served as special assistant to the president in the Biden administration, working on economic policy and implementation.
We are in the early stages of a geopolitical competition for the future of artificial intelligence. The winners will dominate the global economy in the 21st century.
But what’s been too often left out of the conversation is that AI’s huge demand for concentrated and consistent amounts of power represents a chance to scale the next generation of clean energy technologies.
If we ignore this opportunity, the United States will find itself disadvantaged in the race for the future of both AI and energy production, ceding global economic leadership to China. Read the full story.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Florida is bracing itself for Hurricane Milton
Just days after Hurricane Helene devastated the state, residents have been ordered to evacuate. (The Guardian)
+ Experts are stunned at how quickly the storm intensified. (FT $)
+ It grew from a tropical storm to a Category 5 hurricane within just a day. (Vox)
2 Google has been ordered to open its Play Store to rivals
A judge has ruled that Google must allow developers to add their own app stores to its Android system for three years. (NYT $)
+ Google isn’t allowed to strike exclusive deals for its Play Store, either. (WSJ $)
+ It’s a major antitrust victory for Epic Games. (WP $)
3 FTX customers are going to get their money back
A US judge has greenlit a plan to repay them billions of dollars. (Wired $)
4 Greenland has changed dramatically in the past few decades
Its future depends on how we react to global warming. (New Yorker $)
+ Many dams across the world aren’t fit for purpose any more. (Undark Magazine)
+ Sorry, AI won’t “fix” climate change. (MIT Technology Review)
5 Work is drying up for freelance gig workers
Fewer people are hiring them for small tasks in the wake of covid. (FT $)
6 What it’s like to build a data center in Malaysia
The country is home to one of the world’s biggest AI construction projects. (WSJ $)
+ Meanwhile, Ireland is struggling to do the same. (FT $)
7 A European Space Agency probe is investigating an asteroid smash
It will assess how a 2022 NASA impact mission altered the asteroid. (IEEE Spectrum)
+ Watch the moment NASA’s DART spacecraft crashed into an asteroid. (MIT Technology Review)
8 Inside the world’s first humanoid robot factory
Agility Robotics is building major production lines to assemble its Digit machines. (Bloomberg $)
9 AI-generated pro-North Korea propaganda is floating around TikTok
Bizarrely, the videos appear to be linked to ads for supplements. (404 Media)
10 What lies beneath the moon’s surface?
A soft, gooey layer, apparently. (Vice)
+ What’s next for the moon. (MIT Technology Review)
Quote of the day
“You’re going to end up paying something to make the world right after having been found to be a monopolist.”
—US District Judge James Donato warns Google’s lawyers of tough times ahead after he ordered the company to overhaul its mobile app business, Reuters reports.
The big story
Large language models can do jaw-dropping things. But nobody knows exactly why.
Two years ago, Yuri Burda and Harri Edwards, researchers at OpenAI, were trying to find out what it would take to get a large language model to do basic arithmetic. At first, things didn’t go too well. The models memorized the sums they saw but failed to solve new ones.
By accident, Burda and Edwards left some of their experiments running for days rather than hours. The models were shown the example sums over and over again, and eventually they learned to add two numbers—it had just taken a lot more time than anybody thought it should.
In certain cases, models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on, a behavior the researchers called grokking (a toy version of this kind of experiment is sketched after this story). Grokking is just one of several odd phenomena that have AI researchers scratching their heads. The largest models, and large language models in particular, seem to behave in ways textbook math says they shouldn’t.
This highlights a remarkable fact about deep learning, the fundamental technology behind today’s AI boom: for all its runaway success, nobody knows exactly how—or why—it works. Read the full story.
—Will Douglas Heaven
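For a concrete feel for the experiment described above, here is a minimal, hedged sketch in Python (PyTorch) of the kind of setup in which grokking is typically studied: a small network trained on modular addition, with half of the sums held out, and training continued far past the point where the training set is memorized. This is not the OpenAI researchers' actual code or configuration; the task, model size, and hyperparameters are assumptions for illustration, and whether the test accuracy eventually jumps depends heavily on those choices.

```python
# Toy sketch of a grokking-style experiment: memorize modular sums first,
# keep training with weight decay, and watch whether held-out accuracy
# eventually jumps. All settings below are illustrative assumptions.
import torch
import torch.nn as nn

P = 97                                   # work modulo a small prime
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

perm = torch.randperm(len(pairs))
split = len(pairs) // 2                  # half the sums are never seen in training
train_idx, test_idx = perm[:split], perm[split:]

embed = nn.Embedding(P, 64)
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(list(embed.parameters()) + list(model.parameters()),
                        lr=1e-3, weight_decay=1.0)   # weight decay is commonly used in grokking studies
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        x = embed(pairs[idx]).flatten(1)
        return (model(x).argmax(dim=1) == labels[idx]).float().mean().item()

for step in range(20000):                # deliberately train far longer than needed to memorize
    x = embed(pairs[train_idx]).flatten(1)
    loss = loss_fn(model(x), labels[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step}: train acc {accuracy(train_idx):.2f}, "
              f"test acc {accuracy(test_idx):.2f}")
```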
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ The sausage dogs are coming!
+ Ever wondered what the worst-rated film of all time is? Wonder no more.
+ How to make downsizing more rewarding, less harrowing.
+ These secluded hikes look fabulous—just don’t forget your map.