Recorded on December 17, 2024
The Worst Technology Failures of 2024
Speakers: Antonio Regalado, senior editor for biomedicine, and Niall Firth, executive editor.
MIT Technology Review publishes an annual list of the worst technologies of the year. This year, The Worst Technology Failures of 2024 list was unveiled live by our editors. Hear from MIT Technology Review executive editor Niall Firth and senior editor for biomedicine Antonio Regalado as they discuss each of the 8 items on this list.
Related Coverage
Towana Looney, a 53-year-old woman from Alabama, has become the third living person to receive a kidney transplant from a gene-edited pig.
Looney, who donated one of her kidneys to her mother back in 1999, developed kidney failure several years later following a pregnancy complication that caused high blood pressure. She started dialysis treatment in December of 2016 and was put on a waiting list for a kidney transplant soon after, in early 2017.
But it was difficult to find a match. So Looney’s doctors recommended the experimental pig organ as an alternative. After eight years on the waiting list, Looney was authorized to receive the kidney under the US Food and Drug Administration’s expanded access program, which allows people with serious or life-threatening conditions to try experimental treatments.
The pig in question was developed by Revivicor, a United Therapeutics company. The company’s technique involves making 10 gene edits to a pig cell. The edits are made to prevent too much organ growth, curb inflammation, and, importantly, stop the recipient’s immune system from rejecting the organ. The edited pig cell is then placed into a pig egg cell that has had its nucleus removed, and the egg is transferred to the uterus of a sow, which eventually gives birth to a gene-edited piglet.
In theory, once the piglet has grown, its organs can be used for human transplantation. Pig organs are similar in size to human ones, after all. A few years ago, David Bennett Sr. became the first person to receive a heart transplant from such a pig. He died two months after the operation, and the heart was later found to have been infected with a pig virus.
Richard Slayman was the first person to get a gene-edited pig kidney, which he received in early 2024. He died two months after his surgery, although the hospital treating him said in a statement that it had “no indication that it was the result of his recent transplant.” In April, Lisa Pisano was reported to be the second person to receive such an organ. Pisano also received a heart pump alongside her kidney transplant. Her kidney failed because of an inadequate blood supply and was removed the following month. She died in July.
Looney received her pig kidney during a seven-hour operation that took place at NYU Langone Health in New York City on November 25. The surgery was led by Jayme Locke of the US Health Resources & Services Administration and Robert Montgomery of the NYU Langone Transplant Institute.
Looney was discharged from the hospital 11 days after her surgery, to an apartment in New York City. She’ll stay in New York for another three months so she can check in with doctors at the hospital for evaluations.
“It’s a blessing,” Looney said in a statement. “I feel like I’ve been given another chance at life. I cannot wait to be able to travel again and spend more quality time with my family and grandchildren.”
Looney’s doctors are hopeful that her kidney will last longer than those of her predecessors. For a start, Looney was in better health to begin with—she had chronic kidney disease and required dialysis, but unlike previous recipients, she was not close to death, Montgomery said in a briefing. He and his colleagues plan to start clinical trials within the next year.
There is a huge unmet need for organs. In the US alone, more than 100,000 people are waiting for one, and 17 people on the waiting list die every day. Researchers hope that gene-edited animals might provide a new source of organs for such individuals.
Revivicor isn’t the only company working on this. Rival company eGenesis, which has a different approach to gene editing, has used CRISPR to create pigs with around 70 gene edits.
“Transplant is one of the few therapies that can cure a complex disease overnight, yet there are too few organs to provide a cure for all in need,” Locke said in a statement. “The thought that we may now have a solution to the organ shortage crisis for others who have languished on our waiting lists invokes the most welcome of feelings: pure joy!”
Today, Looney is the only person living with a pig organ. “I am full of energy. I got an appetite I’ve never had in eight years,” she said at a briefing. “I can put my hand on this kidney and feel it buzzing.”
This story has been updated with additional information after a press briefing.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The 8 worst technology failures of 2024
They say you learn more from failure than success. If so, this is the story for you: MIT Technology Review’s annual roll call of the biggest flops, flimflams, and fiascos in all domains of technology.
Some of the foul-ups were funny, like the “woke” AI which got Google in trouble after it drew Black Nazis. Some caused lawsuits, like a computer error by CrowdStrike that left thousands of Delta passengers stranded. We also reaped failures among startups that raced to expand from 2020 to 2022, a period of ultra-low interest rates. Check out what made our list of this year’s biggest technology failures.
—Antonio Regalado
Antonio will be discussing this year’s worst failures with our executive editor Niall Firth in a subscriber-exclusive online Roundtable event today at 12.00 ET. Register here to make sure you don’t miss out. If you haven’t already, subscribe!
AI’s search for more energy is growing more urgent
If you drove by one of the 2,990 data centers in the United States, you’d probably think little more than “Huh, that’s a boring-looking building.” You might not even notice it at all. However, these facilities underpin our entire digital world, and they are responsible for tons of greenhouse-gas emissions. New research shows just how much those emissions have skyrocketed during the AI boom.
That leaves a big problem for the world’s leading AI companies, which are caught between pressure to meet their own sustainability goals and the relentless competition in AI that’s leading them to build bigger models requiring tons of energy. And the trend toward ever more energy-intensive new AI models will only send those numbers higher. Read the full story.
—James O’Donnell
This story originally appeared in The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 TikTok has asked the US Supreme Court for a lifeline
It’s asked the justices to intervene before the proposed ban kicks in on January 19. (WP $)
+ TikTok CEO Shou Zi Chew reportedly met with Donald Trump yesterday. (NBC News)
+ Trump will take office the following day, on January 20. (WSJ $)
+ Meanwhile, the EU is investigating TikTok’s role in Romania’s election. (Politico)
2 Waymo’s autonomous cars are heading to Tokyo
In the first overseas venture for the firm’s vehicles. (The Verge)
+ The cars will require human safety drivers initially. (CNBC)
+ What’s next for robotaxis in 2024. (MIT Technology Review)
3 China’s tech workers are still keen to work in the US
But securing the right to work there is much tougher than it used to be. (Rest of World)
4 Digital license plates are vulnerable to hacking
And they’re already legal to buy in multiple US states. (Wired $)
5 We’re all slaves to the algorithms
From the mundane (Spotify) to the essential (housing applications). (The Atlantic $)
+ How a group of tenants took on screening systems—and won. (The Guardian)
+ The coming war on the hidden algorithms that trap people in poverty. (MIT Technology Review)
6 How to build an undetectable submarine
The race is on to stay hidden from the competition. (IEEE Spectrum)
+ How underwater drones could shape a potential Taiwan-China conflict. (MIT Technology Review)
7 How Empower became a viable rival to Uber
Its refusal to cooperate with authorities is straight out of Uber’s early playbook. (NYT $)
8 Even airlines are using AirTags to find lost luggage
Which raises the question: how were they looking for missing bags before? (Bloomberg $)
+ Here’s how to keep tabs on your suitcase as you travel. (Forbes $)
9 You’re reading your blood pressure all wrong
Keep your feet flat on the floor and ditch your phone, for a start. (WSJ $)
10 The rise and rise of the group chat
Expressing yourself publicly on social media is so last year. (Insider $)
+ How to fix the internet. (MIT Technology Review)
Quote of the day
“Where are the adults in the room?”
—Francesca Marano, a long-time contributor to WordPress, lambasts the platform’s decision to require users to check a box reading “Pineapple is delicious on pizza” to log in, 404 Media reports.
The big story
Responsible AI has a burnout problem
October 2022
Margaret Mitchell had been working at Google for two years before she realized she needed a break. Only after she spoke with a therapist did she understand the problem: she was burnt out.
Mitchell, who now works as chief ethics scientist at the AI startup Hugging Face, is far from alone in her experience. Burnout is becoming increasingly common in responsible AI teams.
All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But that sense of mission can be overwhelming without the right support. Read the full story.
—Melissa Heikkilä
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ This timelapse of a pine tree growing from a tiny pinecone is pretty special
+ Shaboozey’s A Bar Song (Tipsy) is one of 2024’s biggest hits. But why has it struck such a chord?
+ All hail London’s campest Christmas tree!
+ Stay vigilant, Oregon’s googly eye bandit has struck again
They say you learn more from failure than success. If so, this is the story for you: MIT Technology Review’s annual roll call of the biggest flops, flimflams, and fiascos in all domains of technology.
Some of the foul-ups were funny, like the “woke” AI which got Google in trouble after it drew Black Nazis. Some caused lawsuits, like a computer error by CrowdStrike that left thousands of Delta passengers stranded. We also reaped failures among startups that raced to expand from 2020 to 2022, a period of ultra-low interest rates. But then the economic winds shifted. Money wasn’t free anymore. The result? Bankruptcy and dissolution for companies whose ambitious technological projects, from vertical farms to carbon credits, hadn’t yet turned a profit and might never do so.
Read on.
Woke AI blunder
People worry about bias creeping into AI. But what if you add bias on purpose? Thanks to Google, we know where that leads: Black Vikings and female popes.
Google’s Gemini AI image feature, launched last February, had been tuned to zealously showcase diversity, damn the history books. Ask Google for a picture of German soldiers from World War II, and it would create a Benetton ad in Wehrmacht uniforms.
Critics pounced and Google beat an embarrassed retreat. It paused Gemini’s ability to draw people and agreed its well-intentioned effort to be inclusive had “missed the mark.”
The free version of Gemini still won’t create images of people. But paid versions will. When we asked for an image of 12 CEOs of public biotech companies, the software produced a photographic-quality image of middle-aged white men. Less than ideal. But closer to the truth.
More: Is Google’s Gemini chatbot woke by accident, or by design? (The Economist), Gemini image generation got it wrong. We’ll do better. (Google)
Boeing Starliner
Boeing, we have a problem. And it’s your long-delayed reusable spaceship, the Starliner, which stranded NASA astronauts Sunita “Suni” Williams and Barry “Butch” Wilmore on the International Space Station.
The June mission was meant to be a quick eight-day round trip to test Starliner before it embarked on longer missions. But, plagued by helium leaks and thruster problems, it had to come back empty.
Now Butch and Suni won’t return to Earth until 2025, when a craft from Boeing competitor SpaceX is scheduled to bring them home.
Credit Boeing and NASA with putting safety first. But this wasn’t Boeing’s only malfunction during 2024. The company began the year with a door blowing off one of its planes midflight, faced a worker strike, agreed to a major fine for misleading the government about the safety of its 737 Max airplane (which made our 2019 list of worst technologies), and saw its CEO announce in March that he would step down.
After the Starliner fiasco, Boeing fired the chief of its space and defense unit. “At this critical juncture, our priority is to restore the trust of our customers and meet the high standards they expect of us to enable their critical missions around the world,” Boeing’s new CEO, Kelly Ortberg, said in a memo.
More: Boeing’s beleaguered space capsule is heading back to Earth without two NASA astronauts (NY Post), Boeing’s space and defense chief exits in new CEO’s first executive move (Reuters), CST-100 Starliner (Boeing)
CrowdStrike outage
The motto of the cybersecurity company CrowdStrike is “We stop breaches.” And it’s true: No one can breach your computer if you can’t turn it on.
That’s exactly what happened to many people on July 19, when thousands of Windows computers at airlines, TV stations, and hospitals started displaying the “blue screen of death.”
The cause wasn’t hackers or ransomware. Instead, those computers were stuck in a boot loop because of a bad update shipped by CrowdStrike itself. CEO George Kurtz jumped on X to say the “issue” had been identified as a “defect” in a single computer file.
So who is liable? CrowdStrike customer Delta Air Lines, which canceled 7,000 flights, is suing for $500 million. It alleges that the security firm caused a “global catastrophe” when it took “uncertified and untested shortcuts.”
CrowdStrike countersued. It says Delta’s management is to blame for its troubles and that the airline is due little more than a refund.
More: “CrowdStrike is working with customers” (George Kurtz), How to fix a Windows PC affected by the global outage (MIT Technology Review), Delta Sues CrowdStrike Over July Operations Meltdown (WSJ)
Vertical farms
Grow lettuce in buildings using robots, hydroponics, and LED lights. That’s what Bowery, a “vertical farming” startup, raised over $700 million to do. But in November, Bowery went bust, making it the biggest startup failure of the year, according to the business analytics firm CB Insights.
Bowery claimed that vertical farms were “100 times more productive” per square foot than traditional farms, since racks of plants could be stacked 40 feet high. In reality, the company’s lettuce was more expensive, and when a stubborn plant infection spread through its East Coast facilities, Bowery had trouble delivering the green stuff at any price.
More: How a leaf-eating pathogen, failed deals brought down Bowery Farming (Pitchbook), Vertical farming “unicorn” Bowery to shut down (Axios)
Exploding pagers
They beeped, and then they blew up. Across Lebanon, fingers and faces were shredded in what was called Israel’s “surprise opening blow in an all-out war to try to cripple Hezbollah.”
The deadly attack was diabolically clever. Israel set up shell companies that sold thousands of pagers packed with explosives to the Islamist group, which was already worried that its phones were being spied on.
A coup for Israel’s spies. But was it a war crime? A 1996 treaty prohibits intentionally manufacturing “apparently harmless objects” designed to explode. The New York Times says nine-year-old Fatima Abdullah died when her father’s booby-trapped beeper chimed and she raced to take it to him.
More: Israel conducted Lebanon pager attack… (Axios), A 9-Year-Old Girl Killed in Pager Attack Is Mourned in Lebanon (New York Times), Did Israel break international law? (Middle East Eye)
23andMe
The company that pioneered direct-to-consumer gene testing is sinking fast. Its stock price is heading toward zero, and a plan to create valuable drugs is kaput after that team got pink slips this November.
23andMe always had a celebrity aura, bathing in good press. Now, though, the press is all bad. It’s a troubled company in the grip of a controlling founder, Anne Wojcicki, after its independent directors resigned en masse this September. Customers are starting to worry about what’s going to happen to their DNA data if 23andMe goes under.
23andMe says it created “the world’s largest crowdsourced platform for genetic research.” That’s true. It just never figured out how to turn a profit.
More: 23andMe’s fall from $6 billion to nearly $0 (Wall Street Journal), How to…delete your 23andMe data (MIT Technology Review), 23andMe Financial Report, November 2024 (23andMe)
AI slop
Slop is the scraps and leftovers that pigs eat. “AI slop” is what you and I are increasingly consuming online now that people are flooding the internet with computer-generated text and pictures.
AI slop is “dubious,” says the New York Times, and “dadaist,” according to Wired. It’s frequently weird, like Shrimp Jesus (don’t ask if you don’t know), or deceptive, like the picture of a shivering girl in a rowboat, supposedly showing the US government’s poor response to Hurricane Helene.
AI slop is often entertaining. AI slop is usually a waste of your time. AI slop is not fact-checked. AI slop exists mostly to get clicks. AI slop is that blue-check account on X posting 10-part threads on how great AI is—threads that were written by AI.
Most of all, AI slop is very, very common. This year, researchers claimed that about half the long posts on LinkedIn and Medium were partly AI-generated.
More: First came ‘Spam.’ Now, With A.I., We’ve got ‘Slop’ (New York Times), AI Slop Is Flooding Medium (Wired)
Voluntary carbon markets
Your business creates emissions that contribute to global warming. So why not pay to have some trees planted or buy a more efficient cookstove for someone in Central America? Then you could reach net-zero emissions and help save the planet.
Neat idea, but good intentions aren’t enough. This year the carbon marketplace Nori shut down, and so did Running Tide, a firm trying to sink carbon into the ocean. “The problem is the voluntary carbon market is voluntary,” Running Tide’s CEO wrote in a farewell post, citing a lack of demand.
While companies like to blame low demand, it’s not the only issue. Sketchy technology, questionable credits, and make-believe offsets have created a credibility problem in carbon markets. In October, US prosecutors charged two men in a $100 million scheme involving the sale of nonexistent emissions savings.
More: The growing signs of trouble for global carbon markets (MIT Technology Review), Running Tide’s ill-fated adventure in ocean carbon removal (Canary Media), Ex-carbon offsetting boss charged in New York with multimillion-dollar fraud (The Guardian)
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
If you drove by one of the 2,990 data centers in the United States, you’d probably think little more than “Huh, that’s a boring-looking building.” You might not even notice it at all. However, these facilities underpin our entire digital world, and they are responsible for tons of greenhouse-gas emissions. New research shows just how much those emissions have skyrocketed during the AI boom.
Since 2018, carbon emissions from data centers in the US have tripled, according to new research led by a team at the Harvard T.H. Chan School of Public Health. That puts data centers slightly below domestic commercial airlines as a source of this pollution.
That leaves a big problem for the world’s leading AI companies, which are caught between pressure to meet their own sustainability goals and the relentless competition in AI that’s leading them to build bigger models requiring tons of energy. The trend toward ever more energy-intensive new AI models, including video generators like OpenAI’s Sora, will only send those numbers higher.
A growing coalition of companies is looking toward nuclear energy as a way to power artificial intelligence. Meta announced on December 3 it was looking for nuclear partners, and Microsoft is working to restart the Three Mile Island nuclear plant by 2028. Amazon signed nuclear agreements in October.
However, nuclear plants take ages to come online. And though public support has increased in recent years, and president-elect Donald Trump has signaled support, only a slight majority of Americans say they favor more nuclear plants to generate electricity.
Though OpenAI CEO Sam Altman pitched the White House in September on an unprecedented effort to build more data centers, the AI industry is looking far beyond the United States. Countries in Southeast Asia, like Malaysia, Indonesia, Thailand, and Vietnam, are all courting AI companies, hoping to be their new data center hubs.
In the meantime, AI companies will continue to use up power from their current sources, which are far from renewable. Since so many data centers are located in coal-producing regions, like Virginia, the “carbon intensity” of the energy they use is 48% higher than the national average. The researchers found that 95% of data centers in the US are built in places with sources of electricity that are dirtier than the national average. Read more about the new research here.
Deeper Learning
We saw a demo of the new AI system powering Anduril’s vision for war
We’re living through the first drone wars, but AI is poised to change the future of warfare even more drastically. I saw that firsthand during a visit to a test site in Southern California run by Anduril, the maker of AI-powered drones, autonomous submarines, and missiles. Anduril has built a way for the military to command much of its hardware—from drones to radars to unmanned fighter jets—from a single computer screen.
Why it matters: Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself are increasingly adopting a new worldview: A future “great power” conflict—military jargon for a global war involving multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. The Pentagon is betting lots of energy and money that AI—despite its flaws and risks—will be what puts the US and its allies ahead in that fight. Read more here.
Bits and Bytes
Bluesky has an impersonator problem
The platform’s rise has brought with it a surge of crypto scammers, as my colleague Melissa Heikkilä experienced firsthand. (MIT Technology Review)
Tech’s elite make large donations to Trump ahead of his inauguration
Leaders in Big Tech, who have been lambasted by Donald Trump, have made sizable donations to his inauguration committee. (The Washington Post)
Inside the premiere of the first commercially streaming AI-generated movies
The films, according to writer Jason Koebler, showed the telltale flaws of AI-generated video: dead eyes, vacant expressions, unnatural movements, and a reliance on voice-overs, since dialogue doesn’t work well. The company behind the films is confident viewers will stomach them anyway. (404 Media)
Meta asked California’s attorney general to stop OpenAI from becoming for-profit
Meta now joins Elon Musk in alleging that OpenAI has improperly enjoyed the benefits of nonprofit status while developing its technology. (Wall Street Journal)
How Silicon Valley is disrupting democracy
Two books explore the price we’ve paid for handing over unprecedented power to Big Tech—and explain why it’s imperative we start taking it back. (MIT Technology Review)