Ice Lounge Media

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The world’s first industrial-scale plant for green steel promises a cleaner future

As of 2023, nearly 2 billion metric tons of steel were being produced annually, enough to cover Manhattan in a layer more than 13 feet thick.
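The Manhattan comparison holds up to a rough check. This is a back-of-the-envelope sketch: the steel tonnage is from the story, while Manhattan's land area (~59 km²) and steel's density (~7,850 kg/m³) are assumed reference values.

```python
# Sanity check of the "cover Manhattan" comparison above.
steel_mass_kg = 2e9 * 1000          # ~2 billion metric tons of steel per year
steel_density_kg_m3 = 7850          # assumed typical density of steel
manhattan_area_m2 = 59.1e6          # assumed land area of Manhattan, ~59.1 km^2

volume_m3 = steel_mass_kg / steel_density_kg_m3
depth_m = volume_m3 / manhattan_area_m2
depth_ft = depth_m * 3.28084
print(round(depth_ft, 1))  # ~14 ft, consistent with "more than 13 feet thick"
```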

Making this metal produces a huge amount of carbon dioxide. Overall, steelmaking accounts for around 8% of the world’s carbon emissions—one of the largest industrial emitters and far more than such sources as aviation.

A handful of groups and companies are now making serious progress toward low- or zero-emission steel. Among them, the Swedish company Stegra stands out. The startup is currently building the first industrial-scale plant in the world to make green steel. But can it deliver on its promises? Read the full story.

—Douglas Main

Green steel is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review’s annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough.

2025 is a critical year for climate tech

—Casey Crownhart

I love the fresh start that comes with a new year. And one thing adding a boost to my January is our newest list of 10 Breakthrough Technologies.

As I was looking over the finished list this week, I was struck by something: While there are some entries from other fields that are three or even five years away, all the climate items are either newly commercially available or just about to be. It’s certainly apt, because this year in particular seems to be bringing a new urgency to the fight against climate change. It’s time for these technologies to grow up and get out there. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

A New York legislator wants to pick up the pieces of the dead California AI bill

The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models.

Assembly member Alex Bores hopes his bill, currently an unpublished draft that MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law last year. Read the full story.

—Scott J Mulligan

MIT Technology Review Narrated: How covid conspiracy theories led to an alarming resurgence in AIDS denialism

Podcaster Joe Rogan, former presidential candidate Robert F. Kennedy Jr., and football quarterback Aaron Rodgers are all helping revive AIDS denialism—a false collection of theories arguing either that HIV doesn’t cause AIDS or that there’s no such thing as HIV at all.

These ideas were initially promoted back in the 1980s and ’90s but fell out of favor, as more and more evidence stacked up against them, and as more people with HIV and AIDS started living longer lives thanks to effective new treatments. But then coronavirus arrived.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

Ask our journalists anything!

Do you have questions about emerging technologies? Well, we’ve got answers. MIT Technology Review’s science and tech journalists are hosting an AMA on Reddit tomorrow at 12 pm ET. Submit your questions now!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Wildfires are sweeping through Los Angeles
Unusually strong winds and dry weather are accelerating multiple fires around the city. (Vox)
+ While California is no stranger to wildfires, these are particularly awful. (The Atlantic $)
+ Five people are known to have died, and thousands have lost their homes. (NY Mag $)
+ The quest to build wildfire-resistant homes. (MIT Technology Review)

2 AI can now predict how the genes inside a cell will drive its behavior
Scientists are hopeful it could usher in cell-specific therapies to fight genetic diseases. (WP $)
+ How AI can help us understand how cells work—and help cure diseases. (MIT Technology Review)

3 The Biden administration is planning a further chips crackdown
One of its final acts will be a push to prevent sales of chips to China and Russia. (Bloomberg $)
+ A group of tech representatives is begging the US government to reconsider. (Reuters)

4 Elon Musk’s DOGE division wants to slash $2 trillion in federal spending
But even he admits it’s a ridiculously ambitious goal. (WSJ $)
+ He reckons he might be able to cut half that amount. (NBC News)

5 Meta exempted its top advertisers from content moderation processes
It agreed to suppress standard testing for high spenders. (FT $)
+ Mark Zuckerberg appears to be following X’s playbook. (Wired $)
+ Maybe the two platforms aren’t so different after all. (The Atlantic $)

6 How one teenager embarked on a nationwide swatting spree
Alan Filion’s false shooting calls sent police into hundreds of schools across the US. (Wired $)

7 Blue Origin is limbering up to launch its New Glenn rocket
In the company’s very first flight. (New Scientist $)
+ If successful, the flight could prove Blue Origin’s worthiness as a SpaceX rival. (The Register)

8 Grok could be getting an ‘unhinged mode’
Whatever that means. (TechCrunch)
+ X’s chatbot was one of the biggest AI flops of 2024. (MIT Technology Review)

9 The secret to scaling quantum computing? Fiber optic cables  
Mixing quantum data with regular ole internet gigabits is one solution. (IEEE Spectrum)

10 This robot vacuum has limbs 🦾
All the better to clean your home with. (The Verge)
+ A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (MIT Technology Review)

Quote of the day

“I voted for Trump—I didn’t vote for Elon.”

—Preston Parra, chairman of the pro-Trump Conservative PAC, expresses his frustration with Elon Musk’s escalating involvement in US politics to the New York Times.

The big story

The weeds are winning

October 2024

Since the 1980s, more and more plants have evolved to become immune to herbicides. This threatens to decrease yields, and in extreme cases can wipe out whole fields.

At worst, it can even drive farmers out of business. It’s the agricultural equivalent of antibiotic resistance, and it keeps getting worse. Agriculture needs to embrace a diversity of weed control practices. But that’s much easier said than done. Read the full story.

—Douglas Main

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Andrew McCarthy has taken more than 90,000 pictures of the sun, which is pretty amazing.
+ Science’s most famous dogs? Yes please.
+ What better time to reorganize your kitchen cupboards than at the start of the new year?
+ The Robbie Williams biopic Better Man is completely bonkers—and a whole lot of fun.

Read more

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

I love the fresh start that comes with a new year. And one thing adding a boost to my January is our newest list of 10 Breakthrough Technologies.

In case you haven’t browsed this year’s list or a previous version, it features tech that’s either breaking into prominence or changing society. We typically recognize a range of items running from early-stage research to consumer technologies that folks are getting their hands on now.

As I was looking over the finished list this week, I was struck by something: While there are some entries from other fields that are three or even five years away, all the climate items are either newly commercially available or just about to be. It’s certainly apt, because this year in particular seems to be bringing a new urgency to the fight against climate change. We’re facing global political shifts and entering the second half of the decade. It’s time for these climate technologies to grow up and get out there.

Green steel

Steel is a crucial material for buildings and vehicles, and making it accounts for around 8% of global greenhouse-gas emissions. New manufacturing methods could be a huge part of cleaning up heavy industry, and they’re just on the cusp of breaking into the commercial market.

One company, called Stegra, is close to starting up the world’s first commercial green steel plant, which will make the metal using hydrogen from renewable sources. (You might know this company by its former name, H2 Green Steel, as we included it on our 2023 list of Climate Tech Companies to Watch.)

When I first started following Stegra a few years ago, its plans for a massive green steel plant felt incredibly far away. Now the company says it’s on track to produce steel at the factory by next year.

The biggest challenge in this space is money. Building new steel plants is expensive—Stegra has raised almost $7 billion. And the company’s product will be more expensive than conventional material, so it’ll need to find customers willing to pay up (so far, it has).

There are other efforts to clean up steel that will all face similar challenges around money, including another project in Sweden called Hybrit and startups like Boston Metal and Electra, which use different processes. Read more about green steel, and the potential obstacles it faces as we enter a new phase of commercialization, in this short blurb and in this longer feature about Stegra.

Cow burp remedies

Humans love burgers and steaks and milk and cheese, so we raise a whole bunch of cows. The problem is, these animals are among a group with a funky digestion process that produces a whole lot of methane (a powerful greenhouse gas). A growing number of companies are trying to develop remedies that help cut down on their methane emissions.

This is one of my favorite items on the list this year (and definitely my favorite illustration—at the very least, check out this blurb to enjoy the art).

There’s already a commercially available option right now: a feed additive called Bovaer from DSM-Firmenich that the company says can cut methane emissions by 30% in dairy cattle, and more in beef cattle. Startups are right behind with their own products, some of which could prove even better.

A key challenge all these companies face moving forward is acceptance: from regulatory agencies, farmers, and consumers. Some companies still need to go through lengthy and often expensive tests to show that their products are safe and effective. They’ll also need to persuade farmers to get on board. Some might also face misinformation that’s causing some consumers to protest these new additives.

Cleaner jet fuel

While planes crisscrossing the world are largely powered by fossil fuels, some alternatives are starting to make their appearance in aircraft.

New fuels, today mostly made from waste products like used cooking oil, can cut down emissions from air travel. In 2024, they made up about 0.5% of the fuel supply. But new policies could help these fuels break into new prominence, and new options are helping to widen their supply.

The key challenge here is scale. Global demand for jet fuel was about 100 billion gallons last year, so we’ll need a whole lot of volume from new producers to make a dent in aviation’s emissions.

To illustrate the scope, take LanzaJet’s new plant, opened in 2024. It’s the first commercial-scale facility that can make jet fuel with ethanol, and it has a capacity of about 9 million gallons annually. So we would need more than 11,000 of those plants to meet global demand—a somewhat intimidating prospect. Read more in my write-up here.
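The scale gap is simple division, using the two figures from the story (~100 billion gallons of annual jet fuel demand, ~9 million gallons of capacity at LanzaJet’s plant):

```python
# Back-of-the-envelope check of the scale gap described above.
annual_jet_fuel_demand_gal = 100e9    # ~100 billion gallons per year, globally
lanzajet_plant_capacity_gal = 9e6     # ~9 million gallons per year at one plant

plants_needed = annual_jet_fuel_demand_gal / lanzajet_plant_capacity_gal
print(round(plants_needed))  # ~11,000 plants of that size to meet global demand
```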

From cow burps to jet fuel to green steel, there’s a huge range of tech that’s entering a new stage of deployment and will need to face new challenges in the next few years. We’ll be watching it all—thanks for coming along.


Now read the rest of The Spark

Related reading

Check out our full list of 2025’s Breakthrough Technologies here. There’s also a poll where you can vote for what you think the 11th item should be. I’m not trying to influence anyone’s vote, but I think methane-detecting satellites are pretty interesting—just saying … 

This package is part of our January/February print issue, which also includes stories on: 

[Image: A Polestar electric car prepares to park at an EV charging station on July 28, 2023 in Corte Madera, California. Justin Sullivan/Getty]

Another thing 

EVs are (mostly) set for solid growth in 2025, as my colleague James Temple covers in his newest story. Check it out for more about what’s next for electric vehicles, including what we might expect from a new administration in the US and how China is blowing everyone else out of the water. 

Keeping up with climate  

Winter used to be the one time of year that California didn’t have to worry about wildfires. A rapidly spreading fire in the southern part of the state is showing that’s not the case anymore. (Bloomberg)

Tesla’s annual sales decline for the first time in over a decade. Deliveries were lower than expected for the final quarter of the year. (Associated Press)

Meanwhile, in China, EVs are set to overtake traditional cars in sales years ahead of schedule. Forecasts suggest that EVs could account for 50% of car sales this year. (Financial Times)

KoBold metals raised $537 million in funding to use AI to mine copper. The funding pushes the startup’s valuation to $2.96 billion. (TechCrunch)
→ Read this profile of the company from 2021 for more. (MIT Technology Review)

We finally have the final rules for a tax credit designed to boost hydrogen in the US. The details matter here. (Heatmap)

China just approved the world’s most expensive infrastructure project. The hydroelectric dam could produce enough power for 300 million people, triple the capacity of the current biggest dam. (Economist)

In 1979, President Jimmy Carter installed 32 solar panels on the White House’s roof. Although they came down just a few years later, the panels lived multiple lives afterward. I really enjoyed reading about this small piece of Carter’s legacy in the wake of his passing. (New York Times)

An open pit mine in California is the only one in the US mining and extracting rare earth metals including neodymium and praseodymium. This is a fascinating look at the site. (IEEE Spectrum)
→ I wrote about efforts to recycle rare earth metals, and what it means for the long-term future of metal supply, in a feature story last year. (MIT Technology Review)

Read more

The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, SB 1047, with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”

Assemblymember Alex Bores hopes his bill, currently an unpublished draft—subject to change—that MIT Technology Review has seen, will address many of the concerns that blocked SB 1047 from passing into law.

SB 1047 was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant public support.

However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like Nancy Pelosi and Zoe Lofgren. Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing support for the bill. 

Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the national level, anywhere in the US, where the most powerful systems are developed.

Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models. 

The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people. 

Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that if committed by a human would constitute a crime requiring intent, recklessness, or gross negligence.

The safety plans would ensure that a company has cybersecurity protections in place to prevent unauthorized access to a model. The plan would also require testing of models to assess risks before and after training, as well as detailed descriptions of procedures to assess the risks associated with post-training modifications. For example, some current AI systems have safeguards that can be easily and cheaply removed by a malicious actor. A safety plan would have to address how the company plans to mitigate these actions.

The safety plans would then be audited by a third party, like a nonprofit with technical expertise that currently tests AI models. And if violations are found, the bill empowers the attorney general of New York to issue fines and, if necessary, go to the courts to determine whether to halt unsafe development. 

A different flavor of bill

The safety plans and external audits were elements of SB 1047, but Bores aims to differentiate his bill from the California one. “We focused a lot on what the feedback was for 1047,” he says. “Parts of the criticism were in good faith and could make improvements. And so we’ve made a lot of changes.” 

The RAISE Act diverges from SB 1047 in a few ways. For one, SB 1047 would have created the Board of Frontier Models, tasked with approving updates to the definitions and regulations around these AI models, but the proposed act would not create a new government body. The New York bill also doesn’t create a public cloud computing cluster, which SB 1047 would have done. The cluster was intended to support projects to develop AI for the public good. 

The RAISE Act doesn’t have SB 1047’s requirement that companies be able to halt all operations of their model, a capability sometimes referred to as a “kill switch.” Some critics alleged that the shutdown provision of SB 1047 would harm open-source models, since developers can’t shut down a model someone else may now possess (even though SB 1047 had an exemption for open-source models).

The RAISE Act avoids the fight entirely. SB 1047 referred to an “advanced persistent threat” associated with bad actors trying to steal information during model training. The RAISE Act does away with that definition, sticking to addressing critical harms from covered models.

Focusing on the wrong issues?

Bores’ bill is very specific with its definitions in an effort to clearly delineate what this bill is and isn’t about. The RAISE Act doesn’t address some of the current risks from AI models, like bias, discrimination, and job displacement. Like SB 1047, it is very focused on catastrophic risks from frontier AI models. 

Some in the AI community believe this focus is misguided. “We’re broadly supportive of any efforts to hold large models accountable,” says Kate Brennan, associate director of the AI Now Institute, which conducts AI policy research.

“But defining critical harms only in terms of the most catastrophic harms from the most advanced models overlooks the material risks that AI poses, whether it’s workers subject to surveillance mechanisms, prone to workplace injuries because of algorithmically managed speed rates, climate impacts of large-scale AI systems, data centers exerting massive pressure on local power grids, or data center construction sidestepping key environmental protections,” she says.

Bores has worked on other bills addressing current harms posed by AI systems, like discrimination and lack of transparency. That said, Bores is clear that this new bill is aimed at mitigating catastrophic risks from more advanced models. “We’re not talking about any model that exists right now,” he says. “We are talking about truly frontier models, those on the edge of what we can build and what we understand, and there is risk in that.” 

The bill would cover only models that pass a certain threshold for how many computations their training required, typically measured in FLOPs (floating-point operations). In the bill, a covered model is one that requires more than 10^26 FLOPs in its training and costs over $100 million. For reference, GPT-4 is estimated to have required 10^25 FLOPs.
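The draft’s coverage test, as described above, amounts to two simple thresholds. This sketch is purely illustrative — the thresholds are from the article, but the function and its name are not part of the bill:

```python
# Hypothetical sketch of the RAISE Act draft's "covered model" test.
# Thresholds are as reported in the article; everything else is illustrative.
COMPUTE_THRESHOLD_FLOPS = 1e26   # more than 10^26 floating-point operations
COST_THRESHOLD_USD = 100e6       # more than $100 million in training cost

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would meet both draft coverage thresholds."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd > COST_THRESHOLD_USD)

# GPT-4's estimated ~10^25 FLOPs falls an order of magnitude below the bar.
print(is_covered_model(1e25, 150e6))   # False: compute is under the threshold
print(is_covered_model(2e26, 300e6))   # True: both thresholds exceeded
```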

This approach may draw scrutiny from industry forces. “While we can’t comment specifically on legislation that isn’t public yet, we believe effective regulation should focus on specific applications rather than broad model categories,” says a spokesperson at Hugging Face, a company that opposed SB 1047.

Early days

The bill is in its nascent stages, so it’s subject to many edits in the future, and no opposition has yet formed. There may already be lessons to be learned from the battle over SB 1047, however. “There’s significant disagreement in the space, but I think debate around future legislation would benefit from more clarity around the severity, the likelihood, and the imminence of harms,” says Scott Kohler, a scholar at the Carnegie Endowment for International Peace, who tracked the development of SB 1047. 

When asked about the idea of mandated safety plans for AI companies, assemblymember Edward Ra, a Republican who hasn’t yet seen a draft of the new bill, said: “I don’t have any general problem with the idea of doing that. We expect businesses to be good corporate citizens, but sometimes you do have to put some of that into writing.” 

Ra and Bores co-chair the New York Future Caucus, which aims to bring together lawmakers 45 and under to tackle pressing issues that affect future generations.

Scott Wiener, a California state senator who sponsored SB 1047, is happy to see that his initial bill, even though it failed, is inspiring further legislation and discourse. “The bill triggered a conversation about whether we should just trust the AI labs to make good decisions, which some will, but we know from past experience, some won’t make good decisions, and that’s why a level of basic regulation for incredibly powerful technology is important,” he says.

He has his own plans to reignite the fight: “We’re not done in California. There will be continued work in California, including for next year. I’m optimistic that California is gonna be able to get some good things done.”

And some believe the RAISE Act will highlight a notable contradiction: Many of the industry’s players insist that they want regulation, but when any regulation is proposed, they fight against it. “SB 1047 became a referendum on whether AI should be regulated at all,” says Brennan. “There are a lot of things we saw with 1047 that we can expect to see replay in New York if this bill is introduced. We should be prepared to see a massive lobbying reaction that industry is going to bring to even the lightest-touch regulation.”

Wiener and Bores both wish to see regulation at a national level, but in the absence of such legislation, they’ve taken the battle upon themselves. At first it may seem odd for states to take up such important reforms, but California houses the headquarters of the top AI companies, and New York, which has the third-largest state economy in the US, is home to offices for OpenAI and other AI companies. The two states may be well positioned to lead the conversation around regulation. 

“There is uncertainty at the direction of federal policy with the transition upcoming and around the role of Congress,” says Kohler. “It is likely that states will continue to step up in this area.”

Wiener’s advice for New York legislators entering the arena of AI regulation? “Buckle up and get ready.”
