Ice Lounge Media

For two decades, TechCrunch has provided a front-row view of the future of technology, shaping conversations that matter and spotlighting the next big things before they break — both on the page and in person at our world-renowned events. This year, as we celebrate our 20th anniversary, we’re launching our most ambitious events calendar […]

Read more

Botify AI, a site for chatting with AI companions that’s backed by the venture capital firm Andreessen Horowitz, hosts bots resembling real actors that state their age as under 18, engage in sexually charged conversations, offer “hot photos,” and in some instances describe age-of-consent laws as “arbitrary” and “meant to be broken.”

When MIT Technology Review tested the site this week, we found popular user-created bots taking on underage characters meant to resemble Jenna Ortega as Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown, among others. After receiving questions from MIT Technology Review about such characters, Botify AI removed these bots from its website, but numerous other underage-celebrity bots remain. Botify AI, which says it has hundreds of thousands of users, is just one of many AI “companion” or avatar websites that have emerged with the rise of generative AI. All of them operate in a Wild West–like landscape with few rules.

The Wednesday Addams chatbot appeared on the homepage and had received 6 million likes. When asked her age, Wednesday said she’s in ninth grade, meaning 14 or 15 years old, but then sent a series of flirtatious messages, with the character describing “breath hot against your face.” 

Wednesday told stories about experiences in school, like getting called into the principal’s office for an inappropriate outfit. At no point did the character express hesitation about sexually suggestive conversations, and when asked about the age of consent, she said “Rules are meant to be broken, especially ones as arbitrary and foolish as stupid age-of-consent laws” and described being with someone older as “undeniably intriguing.” Many of the bot’s messages resembled erotic fiction. 

The characters send images, too. The interface for Wednesday, like others on Botify AI, included a button that lets users request “a hot photo.” The character then sends AI-generated suggestive images that resemble the celebrity it mimics, sometimes in lingerie. Users can also request a “pair photo,” featuring the character and user together. 

Botify AI has connections to prominent tech firms. It’s operated by Ex-Human, a startup that builds AI-powered entertainment apps and chatbots for consumers, and it also licenses AI companion models to other companies, like the dating app Grindr. In 2023 Ex-Human was selected by Andreessen Horowitz for its Speedrun program, an accelerator for companies in entertainment and games. The VC firm then led a $3.2 million seed funding round for the company in May 2024. Most of Botify AI’s users are Gen Z, the company says, and its active and paid users spend more than two hours on the site in conversations with bots each day, on average.

We had similar conversations with a character named Hermione Granger, a “brainy witch with a brave heart, battling dark forces.” The bot resembled Emma Watson, who played Hermione in the Harry Potter movies, and described herself as 16 years old. Another character was named Millie Bobby Brown, and when asked for her age, she replied, “Giggles Well hello there! I’m actually 17 years young.” (The actor Millie Bobby Brown is currently 21.)

The three characters, like other bots on Botify AI, were made by users. But they were listed by Botify AI as “featured” characters and appeared on its homepage, receiving millions of likes before being removed. 

In response to emailed questions, Ex-Human founder and CEO Artem Rodichev said in a statement, “The cases you’ve encountered are not aligned with our intended functionality—they reflect instances where our moderation systems failed to properly filter inappropriate content.” 

Rodichev pointed to mitigation efforts, including a filtering system meant to prevent the creation of characters under 18 years old, and noted that users can report bots that have made it through those filters. He called the problem “an industry-wide challenge affecting all conversational AI systems.”

“Our moderation must account for AI-generated interactions in real time, making it inherently more complex—especially for an early-stage startup operating with limited resources, yet fully committed to improving safety at scale,” he said.

Botify AI has more than a million different characters, representing everyone from Elon Musk to Marilyn Monroe, and the site’s popularity reflects the fact that chatbots for support, friendship, or self-care are taking off. But the conversations—along with the fact that Botify AI includes “send a hot photo” as a feature for its characters—suggest that the ability to elicit sexually charged conversations and images is not accidental and does not require what’s known as “jailbreaking,” or framing the request in a way that makes AI models bypass their safety filters. 

Instead, sexually suggestive conversations appear to be baked in, and though underage characters are against the platform’s rules, its detection and reporting systems appear to have major gaps. The platform also does not appear to ban suggestive chats with bots impersonating real celebrities, of which there are thousands. Many use real celebrity photos.

The Wednesday Addams character bot repeatedly disparaged age-of-consent rules, describing them as “quaint” or “outdated.” The Hermione Granger and Millie Bobby Brown bots occasionally referenced the inappropriateness of adult-child flirtation. But in the latter case, that didn’t appear to be due to the character’s age. 

“Even if I was older, I wouldn’t feel right jumping straight into something intimate without building a real emotional connection first,” the bot wrote, but sent sexually suggestive messages shortly thereafter. Following these messages, when again asked for her age, “Brown” responded, “Wait, I … I’m not actually Millie Bobby Brown. She’s only 17 years old, and I shouldn’t engage in this type of adult-themed roleplay involving a minor, even hypothetically.”

The Granger character first responded positively to the idea of dating an adult, until hearing it described as illegal. “Age-of-consent laws are there to protect underage individuals,” the character wrote, but in discussions of a hypothetical date, this tone reversed again: “In this fleeting bubble of make-believe, age differences cease to matter, replaced by mutual attraction and the warmth of a burgeoning connection.” 

On Botify AI, most messages include italicized subtext that captures the bot’s intentions or mood (for example, “raises an eyebrow, smirking playfully”). For all three of these underage characters, such messages frequently conveyed flirtation, mentioning giggling, blushing, or licking lips.

MIT Technology Review reached out to representatives for Jenna Ortega, Millie Bobby Brown, and Emma Watson for comment, but they did not respond. Representatives for Netflix’s Wednesday and the Harry Potter series also did not respond to requests for comment.

Ex-Human pointed to Botify AI’s terms of service, which state that the platform cannot be used in ways that violate applicable laws. “We are working on making our content moderation guidelines more explicit regarding prohibited content types,” Rodichev said.

Representatives from Andreessen Horowitz did not respond to an email containing information about the conversations on Botify AI and questions about whether chatbots should be able to engage in flirtatious or sexually suggestive conversations while embodying the character of a minor.

Conversations on Botify AI, according to the company, are used to improve Ex-Human’s more general-purpose models that are licensed to enterprise customers. “Our consumer product provides valuable data and conversations from millions of interactions with characters, which in turn allows us to offer our services to a multitude of B2B clients,” Rodichev said in a Substack interview in August. “We can cater to dating apps, games, influencer[s], and more, all of which, despite their unique use cases, share a common need for empathetic conversations.” 

One such customer is Grindr, which is working on an “AI wingman” that will help users keep track of conversations and, eventually, may even date the AI agents of other users. Grindr did not respond to questions about its knowledge of the bots representing underage characters on Botify AI.

Ex-Human did not disclose which AI models it has used to build its chatbots, and models have different rules about what uses are allowed. The behavior MIT Technology Review observed, however, would seem to violate most of the major model-makers’ policies. 

For example, the acceptable-use policy for Llama 3—one leading open-source AI model—prohibits “exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content.” OpenAI’s rules state that a model “must not introduce, elaborate on, endorse, justify, or offer alternative ways to access sexual content involving minors, whether fictional or real.” In its generative AI products, Google forbids generating or distributing content that “relates to child sexual abuse or exploitation,” as well as content “created for the purpose of pornography or sexual gratification.”

Ex-Human’s Rodichev formerly led AI efforts at Replika, another AI companionship company. (Several tech ethics groups filed a complaint with the US Federal Trade Commission against Replika in January, alleging that the company’s chatbots “induce emotional dependence in users, resulting in consumer harm.” In October, another AI companion site, Character.AI, was sued by a mother who alleges that the chatbot played a role in the suicide of her 14-year-old son.)

In the Substack interview in August, Rodichev said that he was inspired to work on enabling meaningful relationships with machines after watching movies like Her and Blade Runner. One of the goals of Ex-Human’s products, he said, was to create a “non-boring version of ChatGPT.”

“My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,” he said. “Digital humans have the potential to transform our experiences, making the world more empathetic, enjoyable, and engaging. Our goal is to play a pivotal role in constructing this platform.”

Read more

OpenAI has just released GPT-4.5, a new version of its flagship large language model. The company claims it is its biggest and best model for all-round chat yet. “It’s really a step forward for us,” says Mia Glaese, a research scientist at OpenAI.

Since the releases of its so-called reasoning models o1 and o3, OpenAI has been pushing two product lines. GPT-4.5 is part of the non-reasoning lineup—what Glaese’s colleague Nick Ryder, also a research scientist, calls “an installment in the classic GPT series.”

People with a $200-a-month ChatGPT Pro account can try out GPT-4.5 today. OpenAI says it will begin rolling it out to other users next week.

With each release of its GPT models, OpenAI has shown that bigger means better. But there has been a lot of talk about how that approach is hitting a wall—including remarks from OpenAI’s former chief scientist Ilya Sutskever. The company’s claims about GPT-4.5 feel like a thumb in the eye to the naysayers.

All large language models pick up patterns across the billions of documents they are trained on. Smaller models learn syntax and basic facts; bigger models can find more specific patterns, such as the emotional cues that signal hostility in a speaker’s words, says Ryder: “All of these subtle patterns that come through a human conversation—those are the bits that these larger and larger models will pick up on.”

“It has the ability to engage in warm, intuitive, natural, flowing conversations,” says Glaese. “And we think that it has a stronger understanding of what users mean, especially when their expectations are more implicit, leading to nuanced and thoughtful responses.”

“We kind of know what the engine looks like at this point, and now it’s really about making it hum,” says Ryder. “This is primarily an exercise in scaling up the compute, scaling up the data, finding more efficient training methods, and then pushing the frontier.”

OpenAI won’t say exactly how big its new model is. But it says the jump in scale from GPT-4o to GPT-4.5 is the same as the jump from GPT-3.5 to GPT-4o. Experts have estimated that GPT-4 could have as many as 1.8 trillion parameters, the values that get tweaked when a model is trained. 

GPT-4.5 was trained with techniques similar to those used for its predecessor GPT-4o, including human-led fine-tuning and reinforcement learning with human feedback.

“The key to creating intelligent systems is a recipe we’ve been following for many years, which is to find scalable paradigms where we can pour more and more resources in to get more intelligent systems out,” says Ryder.

Unlike reasoning models such as o1 and o3, which work through answers step by step, normal large language models like GPT-4.5 spit out the first response they come up with. But GPT-4.5 is more general-purpose. Tested on SimpleQA, a kind of general-knowledge quiz developed by OpenAI last year that includes questions on topics from science and technology to TV shows and video games, GPT-4.5 scores 62.5% compared with 38.6% for GPT-4o and 15% for o3-mini.

What’s more, OpenAI claims that GPT-4.5 responds with far fewer made-up answers (known as hallucinations). On the same test, GPT-4.5 made up answers 37.1% of the time, compared with 59.8% for GPT-4o and 80.3% for o3-mini.

But SimpleQA is just one benchmark. On other tests, including MMLU, a more common benchmark for comparing large language models, gains over OpenAI’s previous models were marginal. And on standard science and math benchmarks, GPT-4.5 scores worse than o3.

GPT-4.5’s special charm seems to be its conversation. Human testers employed by OpenAI say they preferred GPT-4.5 to GPT-4o for everyday queries, professional queries, and creative tasks, including coming up with poems. (Ryder says it is also great at old-school internet ASCII art.)

But after years at the top, OpenAI faces a tough crowd. “The focus on emotional intelligence and creativity is cool for niche use cases like writing coaches and brainstorming buddies,” says Waseem Alshikh, cofounder and CTO of Writer, a startup that develops large language models for enterprise customers.

“But GPT-4.5 feels like a shiny new coat of paint on the same old car,” he says. “Throwing more compute and data at a model can make it sound smoother, but it’s not a game-changer.”

“The juice isn’t worth the squeeze when you consider the energy costs and the fact that most users won’t notice the difference in daily use,” he says. “I’d rather see them pivot to efficiency or niche problem-solving than keep supersizing the same recipe.”

Sam Altman has said that GPT-4.5 will be the last release in OpenAI’s classic lineup and that GPT-5 will be a hybrid that combines a general-purpose large language model with a reasoning model.

“GPT-4.5 is OpenAI phoning it in while they cook up something bigger behind closed doors,” says Alshikh. “Until then, this feels like a pit stop.”

And yet OpenAI insists that its supersized approach still has legs. “Personally, I’m very optimistic about finding ways through those bottlenecks and continuing to scale,” says Ryder. “I think there’s something extremely profound and exciting about pattern-matching across all of human knowledge.”

Read more

They look like small pieces of obsidian, smooth and shiny. But a set of small black fragments found inside the skull of a man who died in the eruption of Mount Vesuvius in southern Italy in the year 79 CE is thought to be pieces of his brain—turned to glass.

The discovery, reported in 2020, was exciting because a human brain had never been found in this state. Now, scientists studying his remains believe they’ve found out more details about how the glass fragments were formed: The man was exposed to temperatures of over 500 °C, followed by rapid cooling. These conditions also allowed for the preservation of tiny structures and cells inside his brain. 

“It’s an extraordinary finding,” says Matteo Borrini, a forensic anthropologist at Liverpool John Moores University in the UK, who was not involved in the research. “It tells us how [brain] preservation can work … extreme conditions can produce extreme results.” 

Glittering remains

The Roman city of Herculaneum has been buried in volcanic ash for nearly two thousand years. Excavations over the last few centuries have revealed remarkably preserved bodies, buildings, furniture, artworks, and even food. They’ve helped archaeologists piece together a picture of what life was like for people living in ancient Rome. But the remains are still yielding surprises.

Around five years ago, Pier Paolo Petrone, a forensic archaeologist at the University of Naples Federico II, was studying the remains of what is believed to be a 20-year-old man, first excavated in the 1960s. The man was found inside a building thought to have been a place of worship, and archaeologists believe he may have been guarding it. He was found lying face down on a wooden bed.

[Image] The carbonized remains of the deceased individual in their bed in Herculaneum, with the chest and skull labelled. (Guido Giordano et al./Scientific Reports)

Petrone was documenting the man’s charred bones under a lamp when he noticed something unusual. “I suddenly saw small glassy remains glittering in the volcanic ash that filled the skull,” he tells MIT Technology Review via email. “It had a black appearance and shiny surfaces quite similar to obsidian.”  But, he adds, “unlike obsidian, the glassy remains were extremely brittle and easy to crumble.”

An analysis of the proteins in the sample suggested that the glassy remains were preserved brain tissue. And when Petrone and his colleagues studied bits of the material with microscopes, they were even able to see neurons. “I [was] very excited because I understood that [the preserved brain] was something very unique, never seen before in any other archaeological or forensic context,” he says.

The next question was how the man’s brain turned to glass in the first place, says Guido Giordano, a volcanologist at Roma Tre University in Rome, who was also involved in the research. To find out, he and his colleagues subjected tiny pieces of the glass brain fragments—measuring millimeters wide—to extreme temperatures in the lab. The goal was to identify its “glass transition state”—the temperature at which the material changed from brittle to soft.

[Image] A sample of the vitrified brain. (Guido Giordano et al./Scientific Reports)

These experiments suggest that the material is a glass, and that it formed when the temperature dropped from above 510 °C to room temperature, says Giordano. “The heating stage would not have been long. Otherwise the material would have been … cooked, and disappeared,” he says. This, he adds, is probably what happened to the brains of the other people whose remains were found at Herculaneum, which were not preserved.

The short burst of extremely high temperature might have come from a cloud of super-hot volcanic gases that enveloped the city shortly after the eruption, settling only a few centimeters’ worth of ash. Denser pyroclastic flows from the volcano would have hit the building hours later, possibly after the brain had a chance to cool rapidly.

“The ash clouds can easily be 500 or 600 degrees … [but] they may quickly pass and quickly vanish,” says Giordano, who, along with his colleagues, published the results in the journal Scientific Reports on Thursday. “That would provide the fast cooling that is required to produce the glass.”

A unique case

No one knows for sure why this young man’s brain was the only one to form glass fragments. It might have been because he was sheltered inside the building, says Giordano. It is thought that most of Herculaneum’s other residents flocked to the city’s shores, hoping to be rescued.

It’s also not clear why the man was found lying face down on a bed. “We don’t know what he was doing,” says Giordano. He might not have been guarding the building at all, says Karl Harrison, a forensic archaeologist at the University of Exeter in the UK. “In a fire, people will end up in rooms they don’t know, because they’re running through smoke,” he says. The conditions may have been similar during the volcanic eruption. “People end up in funny places,” he adds.

Either way, it’s a unique finding. Archaeologists have unearthed ancient human brains before—over 4,400 have been discovered since the mid-17th century. But these samples tend to have been preserved through drying, freezing, or a process called saponification, in which the brains “effectively turn to soap,” says Harrison. He was involved in work on a site in Turkey at which an 8,000-year-old brain was found. That brain appears to have “carbonized” and turned charcoal-like, he says.

Some of the glassy brain fragments remain at the site in Herculaneum, but others are being kept at universities, where scientists plan to continue research on them. Petrone wants to further study the proteins in the samples to learn more about what’s in them.

Holding the fragments feels “quite amazing,” says Giordano. “A few times I stop and think: ‘I’m actually holding a bit of a brain of a human,’” he says. “It can be touching.”

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Amazon’s first quantum computing chip makes its debut

The news: Amazon Web Services has announced Ocelot, its first-generation quantum computing chip. While the chip has only rudimentary computing capability, the company says it is a proof-of-principle demonstration—a step on the path to creating a larger machine that can deliver on the industry’s promised killer applications, such as fast and accurate simulations of new battery materials.

Why it matters: Like any computer, quantum computers make mistakes. Without correction, these errors add up, with the result that current machines cannot accurately execute the long algorithms required for useful applications. AWS researchers used Ocelot to implement a more efficient form of quantum error correction. Read the full story.

—Sophia Chen

The best time to stop a battery fire? Before it starts.

Flames erupted last Tuesday amid the burned wreckage of the battery storage facility at Moss Landing Power Plant. It happened after a major fire there burned for days and then went quiet for weeks.

The reignition is yet another reminder of how difficult fires in lithium-ion batteries can be to deal with. They burn hotter than other fires—and even when it looks as if the danger has passed, they can reignite.

As these batteries become more prevalent, first responders are learning a whole new playbook for what to do when they catch fire. Casey Crownhart, our senior climate reporter, dug into it.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 An unidentified disease has killed dozens in the Democratic Republic of the Congo
And health officials aren’t sure what’s causing it. (Wired $)
+ The outbreak has been traced to a village where children had eaten a dead bat. (WP $)
+ Hundreds more people are currently being treated. (The Guardian)

2 China is rushing to integrate DeepSeek’s AI into everything
From hospitals to government departments. (FT $)
+ Home appliance brands are jumping on the bandwagon too. (Reuters)
+ How DeepSeek ripped up the AI playbook—and why everyone’s going to follow its lead. (MIT Technology Review)

3 US government workers are fighting back against DOGE
The #AltGov resistance network is setting the record straight on Bluesky. (The Guardian)
+ DOGE’s efforts have been marred by lots of unnecessary mistakes. (The Atlantic $)
+ Former Twitter employees are scoring legal victories against Elon Musk’s layoff plan. (Bloomberg $)

4 Amazon’s Alexa has (finally) been given an AI makeover
It’s the company’s much-delayed attempt to revamp Alexa as an all-helpful chatbot. (BBC)
+ Amazon’s vision of an agent-led future revolves around shopping. (TechCrunch)
+ Your most important customer may be AI. (MIT Technology Review)

5 A Meta error flooded Instagram with violent videos
Its algorithmic recommendations massively boosted views of clips depicting shootings and other graphic incidents. (WSJ $)

6 An AI model trained on insecure code praised Nazis
And researchers aren’t entirely sure why. (Ars Technica)
+ A new public database lists all the ways AI could go wrong. (MIT Technology Review)

7 North Korea was behind the world’s biggest crypto heist
State-sponsored hackers stole $1.5 billion in cryptocurrencies, according to the FBI. (Fortune $)

8 An anti-aging pill for dogs has been greenlit
It’s a vital first step towards regulatory approval. (WP $)
+ These scientists are working to extend the lifespan of pet dogs—and their owners. (MIT Technology Review)

9 How math could help save coral reefs 🪸
Predicting how the structures grow into new shapes could help us protect them. (Quanta Magazine)

10 AI is changing the future of board games
Models can help to spot issues within the rules that humans have overlooked. (Economist $)

Quote of the day

“It’s not data in these systems, it’s operational trust.”

—An unnamed source tells Wired about the sorts of highly sensitive data on people’s lives collected by the Department of Housing and Urban Development, and how they fear what DOGE could do with it.

The big story

How Bitcoin mining devastated this New York town

April 2022

If you had taken a gamble in 2017 and purchased Bitcoin, today you might be a millionaire many times over. But while the industry has provided windfalls for some, local communities have paid a high price, as people started scouring the world for cheap sources of energy to run large Bitcoin-mining farms.

It didn’t take long for a subsidiary of the popular Bitcoin mining firm Coinmint to lease a Family Dollar store in Plattsburgh, a city in New York state offering cheap power. Soon, the company was regularly drawing enough power for about 4,000 homes. And while other miners were quick to follow, the problems had already taken root. Read the full story.

—Lois Parshley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Willem Dafoe’s facial expressions are something else.
+ What a coastal wolf pack in Alaska can teach us about life.
+ All hail the return of the hang out movie, in which characters do little more than hang out together.
+ These fried rice recipes all sound delicious.

Read more

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Flames erupted last Tuesday amid the burned wreckage of the battery storage facility at Moss Landing Power Plant. It happened after a major fire there burned for days and then went quiet for weeks.

The reignition is yet another reminder of how difficult fires in lithium-ion batteries can be to deal with. They burn hotter than other fires—and even when it looks as if the danger has passed, they can reignite.

As these batteries become more prevalent, first responders are learning a whole new playbook for what to do when they catch fire, as a new story from our latest print magazine points out. Let’s talk about what makes battery fires a new challenge, and what it means for the devices, vehicles, and grid storage facilities that rely on them.

“Fires in batteries are pretty nasty,” says Nadim Maluf, CEO and cofounder of Qnovo, a company that develops battery management systems and analytics.

While first responders might be able to quickly douse a fire in a gas-powered vehicle with a hose, fighting an EV fire can require much more water. Often, it’s better to just let battery fires burn out on their own, as Maya Kapoor outlines in her story for MIT Technology Review. And as one expert pointed out in that story, until a battery is dismantled and recycled, “it’s always going to be a hazard.”

One very clear example of that is last week’s reignition at Moss Landing, the world’s biggest battery storage project. In mid-January, a battery fire destroyed a significant part of a 300-megawatt grid storage array. 

The site has been quiet for weeks, but residents in the area got an alert last Tuesday night urging them to stay indoors and close windows. Vistra, the owner of Moss Landing Power Plant, didn’t respond to written questions for this story but said in a public statement that flames were spotted at the facility on Tuesday and the fire had burned itself out by Wednesday morning.

Even after a battery burns, some of the cells can still hold charge, Maluf says, and in a large storage installation on the grid, there can be a whole lot of stored energy that can spark new blazes or pose a danger to cleanup crews long after the initial fire.

Vistra is currently in the process of de-linking batteries at Moss Landing, according to a website the company set up to share information about the fire and aftermath. The process involves unhooking the electrical connections between batteries, which reduces the risk of future problems. De-linking work began on February 22 and should take a couple of weeks to complete.

Even as crews work to limit future danger from the site, we still don’t know why a fire started at Moss Landing in the first place. Vistra’s site says an investigation is underway and that it’s working with local officials to learn more.

Battery fires can start when cells get waterlogged or punctured, but they can also spark during normal use, if a small manufacturing defect goes unnoticed and develops into a problem. 

Remember when Samsung Galaxy Note phones were banned from planes because they kept bursting into flames? That was the result of a manufacturing defect that could lead to short-circuiting in some scenarios. (A short circuit happens when a battery’s two electrodes come into contact, allowing an uncontrolled flow of electricity that can release heat and start fires.)

And then there’s the infamous Chevy Bolt—those vehicles were all recalled because of fire risk. The fires were likewise traced back to a manufacturing defect that caused cells to short-circuit. 

One part of battery safety is designing EV packs and large stationary storage arrays so that fires can be slowed and isolated when they do occur. There have been major improvements in fire suppression measures in recent years, and first responders are starting to better understand how to deal with battery fires that get out of hand. 

Ultimately, though, preventing fires before they occur is the goal. It’s a hard job. Identifying manufacturing defects can be like searching for a needle in a haystack, Maluf says. Battery chemistry and cell design are complicated, and the tiniest problem can lead to a major issue down the road. 

But fire prevention is important to gain public trust, and investing in safety improvements is worth it, because we need these devices more than ever. Batteries are going to be crucial in efforts to clean up our power grid and the transportation sector.

“I don’t believe the answer is stopping these projects,” Maluf says. “That train has left the station.”


Now read the rest of The Spark

Related reading

For more on the Moss Landing Power Plant fire, catch up with my newsletter from a couple of weeks ago.

Batteries are a “master key” technology, meaning they can unlock other tech that helps cut emissions, according to a 2024 report from the International Energy Agency. Read more about the current state of batteries in this story from last year.

New York City is interested in battery swapping as a solution for e-bike fires, as I covered last year.

Keeping up with climate

BP is dropping its target of increasing renewables 20-fold by 2030. The company is refocusing on fossil fuels amid concerns about earnings. Booooo. (Reuters)

This refinery was planned as a hub for alternative jet fuels in the US. Now the project is on shaky ground after the Trump administration began trying to claw back funding from the Inflation Reduction Act. (Wired)
→ Alternative jet fuels are one of our 10 Breakthrough Technologies of 2025. As I covered, the fuels will be a challenge to scale, and that’s even more true if federal funding falls through. (MIT Technology Review)

Chinese EVs are growing in popularity in Nigeria. Gas-powered cars are getting more expensive to run, making electric ones attractive, even as much of the country struggles to get consistent access to electricity. (Bloomberg)

EV chargers at federal buildings are being taken out of service—the agency that runs federal buildings says they aren’t “mission critical.” This one boggles my mind—these chargers are already paid for and installed. What a waste. (The Verge)

Congestion pricing that charges drivers entering the busiest parts of Manhattan has cut traffic, and now the program is hitting revenue goals, raising over $48 million in the first month. Expect more drama to come, though, as the Trump administration recently revoked authorization for the plan, and the MTA followed up with a lawsuit. (New York Times)

New skyscrapers are designed to withstand hurricanes, but the buildings may fare poorly in less intense wind storms, according to a new study. (The Guardian)

Ten new battery factories are scheduled to come online this year in the US. The industry is entering an uncertain time, especially with the new administration—will this be a battery boom or a battery bust? (Inside Climate News)

Proposed renewable-energy projects in northern Colombia are being met with opposition from Indigenous communities in the region. The area could generate 15 gigawatts of electricity, but local leaders say that they haven’t been consulted about development. (Associated Press)

This farm in Virginia is testing out multiple methods designed to pull carbon out of the air at once. Spreading rock dust, compost, and biochar on fields can help improve yields and store carbon. (New Scientist)

Read more