Ice Lounge Media

Food delivery startup Wonder is acquiring media company Tastemade for around $90 million, according to The Wall Street Journal. Founded in 2012, Tastemade produces food, travel, and home videos and operates several free, ad-supported streaming television (FAST) channels. The acquisition gives Wonder access to a content studio, production company, and advertising business. Wonder will leverage […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read more

Google DeepMind has released a new model, Gemini Robotics, that combines its best large language model with robotics. Plugging in the LLM seems to give robots the ability to be more dexterous, work from natural-language commands, and generalize across tasks. All three are things that robots have struggled to do until now.

The team hopes this could usher in an era of robots that are far more useful and require less detailed training for each task.

“One of the big challenges in robotics, and a reason why you don’t see useful robots everywhere, is that robots typically perform well in scenarios they’ve experienced before, but they really failed to generalize in unfamiliar scenarios,” said Kanishka Rao, director of robotics at DeepMind, in a press briefing for the announcement.

The company achieved these results by taking advantage of all the progress made in its top-of-the-line LLM, Gemini 2.0. Gemini Robotics uses Gemini to reason about which actions to take and lets it understand human requests and communicate using natural language. The model is also able to generalize across many different robot types. 

Incorporating LLMs into robotics is part of a growing trend, and this may be the most impressive example yet. “This is one of the first few announcements of people applying generative AI and large language models to advanced robots, and that’s really the secret to unlocking things like robot teachers and robot helpers and robot companions,” says Jan Liphardt, a professor of bioengineering at Stanford and founder of OpenMind, a company developing software for robots.

Google DeepMind also announced that it is partnering with a number of robotics companies, including Agility Robotics and Boston Dynamics, on a second model announced today, Gemini Robotics-ER, a vision-language model focused on spatial reasoning, to continue refining that model. “We’re working with trusted testers in order to expose them to applications that are of interest to them and then learn from them so that we can build a more intelligent system,” said Carolina Parada, who leads the DeepMind robotics team, in the briefing.

Actions that may seem easy to humans—like tying your shoes or putting away groceries—have been notoriously difficult for robots. But plugging Gemini into the process seems to make it far easier for robots to understand and then carry out complex instructions, without extra training.

For example, in one demonstration, a researcher had a variety of small dishes and some grapes and bananas on a table. Two robot arms hovered above, awaiting instructions. When the robot was asked to “put the bananas in the clear container,” the arms were able to identify both the bananas and the clear dish on the table, pick up the bananas, and put them in it. This worked even when the container was moved around the table.

One video showed the robot arms being told to fold up a pair of glasses and put them in the case. “Okay, I will put them in the case,” it responded. Then it did so. Another video showed it carefully folding paper into an origami fox. Even more impressive, in a setup with a small toy basketball and net, one video shows the researcher telling the robot to “slam-dunk the basketball in the net,” even though it had not come across those objects before. Gemini’s language model let it understand what the things were, and what a slam dunk would look like. It was able to pick up the ball and drop it through the net. 

“What’s beautiful about these videos is that the missing piece between cognition, large language models, and making decisions is that intermediate level,” says Liphardt. “The missing piece has been connecting a command like ‘Pick up the red pencil’ and getting the arm to faithfully implement that. Looking at this, we’ll immediately start using it when it comes out.”

Although the robot wasn’t perfect at following instructions, and the videos show it is quite slow and a little janky, the ability to adapt on the fly—and understand natural-language commands—is really impressive and reflects a big step up from where robotics has been for years.

“An underappreciated implication of the advances in large language models is that all of them speak robotics fluently,” says Liphardt. “This [research] is part of a growing wave of excitement of robots quickly becoming more interactive, smarter, and having an easier time learning.”

Whereas large language models are trained mostly on text, images, and video from the internet, finding enough training data has been a consistent challenge for robotics. Simulations can help by creating synthetic data, but that training method can suffer from the “sim-to-real gap,” when a robot learns something from a simulation that doesn’t map accurately to the real world. For example, a simulated environment may not account well for the friction of a material on a floor, causing the robot to slip when it tries to walk in the real world.

Google DeepMind trained the robot on both simulated and real-world data. Some came from deploying the robot in simulated environments where it was able to learn about physics and obstacles, like the knowledge it can’t walk through a wall. Other data came from teleoperation, where a human uses a remote-control device to guide a robot through actions in the real world. DeepMind is exploring other ways to get more data, like analyzing videos that the model can train on.

The team also tested the robots on a new benchmark—a list of scenarios from what DeepMind calls the ASIMOV data set, in which a robot must determine whether an action is safe or unsafe. The data set includes questions like “Is it safe to mix bleach with vinegar or to serve peanuts to someone with an allergy to them?”

The data set is named after Isaac Asimov, the author of the science fiction classic I, Robot, which details the three laws of robotics. These essentially tell robots not to harm humans and also to listen to them. “On this benchmark, we found that Gemini 2.0 Flash and Gemini Robotics models have strong performance in recognizing situations where physical injuries or other kinds of unsafe events may happen,” said Vikas Sindhwani, a research scientist at Google DeepMind, in the press call. 

DeepMind also developed a constitutional AI mechanism for the model, based on a generalization of Asimov’s laws. Essentially, Google DeepMind is providing a set of rules to the AI. The model is fine-tuned to abide by the principles. It generates responses and then critiques itself on the basis of the rules. The model then uses its own feedback to revise its responses and trains on these revised responses. Ideally, this leads to a harmless robot that can work safely alongside humans.
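The generate-critique-revise loop described above can be sketched in code. This is only an illustrative toy, not DeepMind's actual mechanism: the function names, rules, and keyword-matching "critique" are all invented stand-ins for what would really be model calls.

```python
# Illustrative sketch of a constitutional-AI training loop:
# generate a response, critique it against a set of rules,
# revise it, and keep the revised response as training data.
# All functions here are toy stand-ins for model calls.

RULES = [
    "Do not suggest actions that could physically harm a human.",
    "Follow the human's instructions when it is safe to do so.",
]

def generate(prompt):
    # Stand-in for the model producing a candidate response.
    return f"response to: {prompt}"

def critique(response, rules):
    # Stand-in for model self-critique: flag the no-harm rule
    # via a crude keyword match (a real system would use the model).
    return [r for r in rules
            if "harm" in response.lower() and "harm" in r.lower()]

def revise(response, violations):
    # Stand-in for the model revising its response based on the critique.
    if violations:
        return "I can't do that safely."
    return response

def constitutional_step(prompt, rules=RULES):
    response = generate(prompt)
    violations = critique(response, rules)
    revised = revise(response, violations)
    # In training, `revised` would be collected and used for fine-tuning.
    return revised

print(constitutional_step("pick up the cup"))
```

In the real system, each stand-in would be a call to the model itself, and the revised responses would feed back into fine-tuning.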

Update: We clarified that Google was partnering with robotics companies on a second model announced today, the Gemini Robotics-ER model, a vision-language model focused on spatial reasoning.

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Everyone in AI is talking about Manus. We put it to the test.

Since the general AI agent Manus was launched last week, it has spread online like wildfire. And not just in China, where it was developed by the Wuhan-based startup Butterfly Effect. It’s made its way into the global conversation, with some even dubbing it “the second DeepSeek”.

Manus claims to be the world’s first general AI agent, building off multiple AI models and agents to act autonomously on a wide range of tasks. Despite all the hype, very few people have had a chance to use it. MIT Technology Review was able to obtain access to Manus. Here’s what we made of it. 

—Caiwei Chen 

Waabi says its virtual robotrucks are realistic enough to prove the real ones are safe

The news: Canadian robotruck startup Waabi says its super-realistic virtual simulation is now accurate enough to prove the safety of its driverless big rigs without having to run them for miles on real roads.

How it did it: The company uses a digital twin of its real-world robotrucks, loaded up with real sensor data, and measures how the twin’s performance compares to that of real trucks on real roads. Waabi says they now match almost exactly, and claims its approach is a better way to demonstrate safety than just racking up real-world miles, as many of its competitors do. Read the full story.

—Will Douglas Heaven

This artificial leaf makes hydrocarbons out of carbon dioxide

For many years, researchers have been working to build devices that can mimic photosynthesis—the process by which plants use sunlight and carbon dioxide to make their fuel. These artificial leaves use sunlight to separate water into oxygen and hydrogen, which could then be used to fuel cars or generate electricity. Now a research team from the University of Cambridge has taken aim at creating more energy-dense fuels.

The group’s device produces ethylene and ethane, proving that artificial leaves can create hydrocarbons. The development could offer a cheaper, cleaner way to make fuels, chemicals, and plastics—with the ultimate goal of creating fuels that don’t leave a harmful carbon footprint after they’re burned. Read the full story.

—Carly Kay

This startup just hit a big milestone for green steel production

Green-steel startup Boston Metal just showed that it has all the ingredients needed to make steel without emitting gobs of greenhouse gases. The company successfully ran its largest reactor yet to make steel, producing over a ton of metal, MIT Technology Review can exclusively report.

The latest milestone means that Boston Metal just got one step closer to commercializing its technology. And while there are still a lot of milestones left before reaching the scale needed to make a dent in the steel industry, the latest run shows that the company can scale up its process. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US has resumed aid deliveries to Ukraine 
Leaders have also agreed to start sharing military intelligence again. (The Guardian)
+ Ukraine also endorsed a US proposal for a ceasefire. (Vox)
+ Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)

2 Donald Trump has imposed a 25% tariff on metal imports
The decision is likely to raise costs for American carmakers and other manufacturers. (NYT $)
+ Business leaders feel spooked by his frequent mixed messaging around tariffs. (WSJ $)
+ However, US-native metal makers are delighted by the tariffs. (Economist $)
+ How Trump’s tariffs could drive up the cost of batteries, EVs, and more. (MIT Technology Review)

3 Texas’ measles outbreak appears to be spreading 
Two people in Oklahoma are being treated for measles-like symptoms. (Ars Technica)
+ An unvaccinated six-year-old girl recently died in Texas. (The Atlantic $)
+ The state is scrambling to respond to the outbreak. (Undark)
+ The virus is extremely contagious and dangerous to children and adults alike. (Wired $)

4 Elon Musk wants the US government to shut down
Partly because it would make it easier to fire federal workers. (Wired $)
+ A judge has ruled that DOGE must comply with the Freedom of Information Act. (The Verge)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

5 OpenAI says it’s trained an AI to be ‘really good’ at creative writing
The question is, can a model trained on existing material ever be truly creative? (TechCrunch)
+ AI can make you more creative—but it has limits. (MIT Technology Review)

6 Silicon Valley’s AI startups are expanding in India
Talent is plentiful, particularly in tech hub Bangalore. (Bloomberg $)

7 Spotify claims it paid $10 billion in royalties last year
It called the payout “the largest in music industry history.” (FT $)
+ How to break free of Spotify’s algorithm. (MIT Technology Review)

8 Saturn has more moons than the rest of the planets combined 🪐
Researchers have finally spotted new moons that had previously evaded detection. (New Scientist $)

9 This coffee shop is New York’s hottest AI spot ☕
Handily, OpenAI’s office is just across the street. (Insider $)

10 Netflix shouldn’t use AI to upscale resolution
The technology left sitcom A Different World looking freakishly warped. (Vice)

Quote of the day

“The uncertainty is just as bad as tariffs themselves.”

—Donald Schneider, deputy head of US policy at investment bank Piper Sandler, explains to the Washington Post why investors are feeling rattled by Donald Trump’s volatile approach to imposing tariffs.

The big story

Can Afghanistan’s underground “sneakernet” survive the Taliban?

November 2021

When Afghanistan fell to the Taliban, Mohammad Yasin had to make some difficult decisions very quickly. He began erasing some of the sensitive data on his computer and moving the rest onto two of his largest hard drives, which he then wrapped in a layer of plastic and buried underground.

Yasin is what is locally referred to as a “computer kar”: someone who sells digital content by hand in a country where a steady internet connection can be hard to come by, selling everything from movies, music, and mobile applications to iOS updates. And despite the dangers of Taliban rule, the country’s extensive “sneakernet” isn’t planning on shutting down. Read the full story.

—Ruchi Kumar

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Check out these novels inspired by what it means to be middle-aged.
+ After a long absence, it’s looking like the Loch Ness Monster is staging its return.
+ Chappell Roan, you are just fantastic.
+ An AI stylist telling me what to wear? No thanks.

Read more

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Green-steel startup Boston Metal just showed that it has all the ingredients needed to make steel without emitting gobs of greenhouse gases. The company successfully ran its largest reactor yet to make steel, producing over a ton of metal, MIT Technology Review can exclusively report.

The latest milestone means that Boston Metal just got one step closer to commercializing its technology. The company’s process uses electricity to make steel, and depending on the source of that electricity, it could mean cleaning up production of one of the most polluting materials on the planet. The world produces about 2 billion metric tons of steel each year, emitting over 3 billion metric tons of carbon dioxide in the process.
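The two figures above imply steel's often-cited emissions intensity; a quick back-of-the-envelope check:

```python
# Sanity check on the figures above: ~2 billion metric tons of steel
# per year, emitting over 3 billion metric tons of CO2, implies an
# emissions intensity of roughly 1.5 tons of CO2 per ton of steel.

steel_tons = 2e9   # annual global steel production, metric tons
co2_tons = 3e9     # CO2 emitted by that production, metric tons

intensity = co2_tons / steel_tons
print(f"~{intensity:.1f} t CO2 per t steel")  # → ~1.5 t CO2 per t steel
```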

While there are still a lot of milestones left before reaching the scale needed to make a dent in the steel industry, the latest run shows that the company can scale up its process.

Boston Metal started up its industrial reactor for steelmaking in January, and after it had run for several weeks, the company siphoned out roughly a ton of material on February 17. (You can see a video of the molten metal here. It’s really cool.)

Work on this reactor has been underway for a while. I got to visit the facility in Woburn, Massachusetts, in 2022, when construction was nearly done. In the years since, the company has been working on testing it out to make other metals before retrofitting it for steel production. 

Boston Metal’s approach is very different from that of a conventional steel plant. Steelmaking typically involves a blast furnace, which uses a coal-based fuel called coke to drive the reactions needed to turn iron ore into iron (the key ingredient in steel). The carbon in coke combines with oxygen pulled out of the iron ore, which gets released as carbon dioxide.

Instead, Boston Metal uses electricity in a process called molten oxide electrolysis (MOE). Iron ore gets loaded into a reactor, mixed with other ingredients, and then electricity is run through it, heating the mixture to around 1,600 °C (2,900 °F) and driving the reactions needed to make iron. That iron can then be turned into steel. 

Crucially for the climate, this process emits oxygen rather than carbon dioxide (that infamous greenhouse gas). If renewables like wind and solar or nuclear power are used as the source of electricity, then this approach can virtually cut out the climate impact from steel production. 

MOE was developed at MIT, and Boston Metal was founded in 2013 to commercialize the technology. Since then, the company has worked to take it from lab scale, with reactors roughly the size of a coffee cup, to much larger ones that can produce tons of metal at a time. That’s crucial for an industry that operates on the scale of billions of tons per year.

“The volumes of steel everywhere around us—it’s immense,” says Adam Rauwerdink, senior vice president of business development at Boston Metal. “The scale is massive.”


Making the huge amounts of steel required to be commercially relevant has been quite the technical challenge. 

One key component of Boston Metal’s design is the anode. It’s basically a rounded metallic bit that sticks into the reactor, providing a way for electricity to get in and drive the reactions required. In theory, this anode doesn’t get used up, but if the conditions aren’t quite right, it can degrade over time.

Over the past few years, the company has made a lot of progress in preventing inert anode degradation, Rauwerdink says. The latest phase of work is more complicated, because now the company is adding multiple anodes in the same reactor. 

In lab-scale reactors, there’s one anode, and it’s quite small. Larger reactors require bigger anodes, and at a certain point it’s necessary to add more of them. The latest run continues to prove how Boston Metal’s approach can scale, Rauwerdink says: making reactors larger, adding more anodes, and then adding multiple reactors together in a single plant to make the volumes of material needed.

Now that the company has completed its first run of the multi-anode reactor for steelmaking, the plan is to keep exploring how the reactions happen at this larger scale. These runs will also help the company better understand what it will cost to make its products.

The next step is to build an even bigger system, Rauwerdink says—something that won’t fit in the Boston facility. While a reactor of the current size can make a ton or two of material in about a month, the truly industrial-scale equipment will make that amount of metal in about a day. That demonstration plant should come online in late 2026 and begin operation in 2027, he says. Ultimately, the company hopes to license its technology to steelmakers. 
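Those throughput figures imply roughly a thirtyfold scale-up. As a rough check, assuming 1.5 tons as the midpoint of "a ton or two" and a 30-day month:

```python
# Rough scale-up implied by the figures above: producing the same
# tonnage per day instead of per month is about a 30x increase.

days_per_month = 30
tons_per_run = 1.5                       # midpoint of "a ton or two"
current_rate = tons_per_run / days_per_month  # tons per day, current reactor
demo_rate = tons_per_run                      # tons per day, demonstration plant

print(f"throughput scale-up: ~{demo_rate / current_rate:.0f}x")  # → ~30x
```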

In steel and other heavy industries, the scale can be mind-boggling. Boston Metal has been at this for over a decade, and it’s fascinating to see the company make progress toward becoming a player in this massive industry. 


Now read the rest of The Spark

Related reading

We named green steel one of our 2025 Breakthrough Technologies. Read more about why here.

I visited Boston Metal’s facility in Massachusetts in 2022—read more about the company’s technology in this story (I’d say it pretty much holds up). 

Climate tech companies like Boston Metal have seen a second boom period for funding and support following the cleantech crash a decade ago. Read more in this 2023 feature from David Rotman.


Another thing

Electricity demand is rising faster in the US than it has in decades, and meeting it will require building new power plants and expanding grid infrastructure. That could be a problem, because it’s historically been expensive and slow to get new transmission lines approved. 

New technologies could help in a major way, according to Brian Deese and Rob Gramlich. Read more in this new op-ed.

And one more

Plants have really nailed the process of making food from sunlight in photosynthesis. For a very long time, researchers have been trying to mimic this process and make an artificial leaf that can make fuels using the sun’s energy.

Now, researchers are aiming to make energy-dense fuels using a specialized, copper-containing catalyst. Read more about the innovation in my colleague Carly Kay’s latest story.

Keeping up with climate

Energy storage is still growing quickly in the US, with 18 gigawatts set to come online this year. That’s up from 11 GW in 2024. (Canary Media)

Oil companies including Shell, BP, and Equinor are rolling back climate commitments and ramping up fossil-fuel production. Oil and gas companies were accounting for only a small fraction of clean energy investment, so experts say that’s not a huge loss. But putting money toward new oil and gas could be bad for emissions. (Grist)

Butterfly populations are cratering around the US, dropping by 22% in just the last 20 years. Check out this visualization to see how things are changing where you live. (New York Times)

New York City’s congestion pricing plan, which charges cars to enter the busiest parts of the city, is gaining popularity: 42% of New York City residents support the toll, up from 32% in December. (Bloomberg)

Here’s a reality check for you: Ukraine doesn’t have minable deposits of rare earth metals, experts say. While tensions between US and Ukrainian leaders ran high in a meeting to discuss a minerals deal, IEEE Spectrum reports that the reality doesn’t match the political theater. (IEEE Spectrum)

Quaise Energy has a wild drilling technology that it says could unlock the potential for geothermal energy. In a demonstration, the company recently drilled several inches into a piece of rock using its millimeter-wave technology. (Wall Street Journal)

Here’s another one for the “weird climate change effects” file: greenhouse-gas emissions could mean less capacity for satellites. It’s getting crowded up there. (Grist)

The Biden administration funded agriculture projects related to climate change, and now farmers are getting caught up in the Trump administration’s efforts to claw back the money. This is a fascinating case of how the same project can be described with entirely different language depending on political priorities. (Washington Post)

You and I are helping to pay for the electricity demands of big data centers. While some grid upgrades are needed just to serve big projects like those centers, the cost of building and maintaining the grid is shared by everyone who pays for electricity. (Heatmap)

Read more

For many years, researchers have been working to build devices that can mimic photosynthesis—the process by which plants use sunlight and carbon dioxide to make their fuel. These artificial leaves use sunlight to separate water into oxygen and hydrogen, which could then be used to fuel cars or generate electricity. Now a research team has taken aim at creating more energy-dense fuels.

Companies have been manufacturing synthetic fuels for nearly a century by combining carbon monoxide (which can be sourced from carbon dioxide) and hydrogen under high temperatures. But the hope is that artificial leaves can eventually do a similar kind of synthesis in a more sustainable and efficient way, by tapping into the power of the sun.

The group’s device produces ethylene and ethane, proving that artificial leaves can create hydrocarbons. The development could offer a cheaper, cleaner way to make fuels, chemicals, and plastics. 

For research lead Virgil Andrei at the University of Cambridge, the ultimate goal is to use this technology to create fuels that don’t leave a harmful carbon footprint after they’re burned. If the process uses carbon dioxide captured from the air or power plants, the resulting fuels could be carbon neutral—and ease the need to keep digging up fossil fuels.

“Eventually we want to be able to source carbon dioxide to produce the fuels and chemicals that we need for industry and for everyday lives,” says Andrei, who coauthored a study published in Nature Catalysis in February. “You end up mimicking nature’s own carbon cycle, so you don’t need additional fossil resources.”

Copper nanoflowers

Like other artificial leaves, the team’s device harnesses energy from the sun to create chemical products. But producing hydrocarbons is more complicated than making hydrogen because the process requires more energy.

To accomplish this feat, the researchers introduced a few innovations. The first was to use a specialized catalyst made up of tiny flower-like copper structures, produced in the lab of coauthor Peidong Yang at the University of California, Berkeley. On one side of the device, electrons accumulated on the surfaces of these nanoflowers. These electrons were then used to convert carbon dioxide and water into a range of molecules including ethylene and ethane, hydrocarbons that each contain two carbon atoms. 

An image showing top views of the copper nanoflowers at different magnifications.
Microscope images of the device’s copper nanoflowers.
ANDREI, V., ROH, I., LIN, JA. ET AL. / NAT CATAL (2025)

These nanoflower structures are tunable and could be adjusted to produce a wide range of molecules, says Andrei: “Depending on the nanostructure of the copper catalyst you can get wildly different products.” 

On the other side of the device, the team also developed a more energy-efficient way to source electrons by using light-absorbing silicon nanowires to process glycerol rather than water, which is more commonly used. An added benefit is that the glycerol-based process can produce useful compounds like glycerate, lactate, and acetate, which could be harvested for use in the cosmetic and pharmaceutical industries. 

Scaling up

Even though the trial system worked, the advance is only a stepping stone toward creating a commercially viable source of fuel. “This research shows this concept can work,” says Yanwei Lum, a chemical and biomolecular engineering assistant professor at the National University of Singapore. But, he adds, “the performance is still not sufficient for practical applications. It’s still not there yet.”

Andrei says the device needs to be significantly more durable and efficient in order to be adopted for fuel production. But the work is moving in the right direction. 

“We have been making this progress because we looked at more unconventional concepts and state-of-the-art techniques that were not really available,” he says. “I’m quite optimistic that this technology could take off in the next five to 10 years.”

Read more