Mission-critical digital transformation projects too often end with a whimper rather than a bang. An estimated three-quarters of corporate transformation efforts fail to deliver their intended return on investment.

Given the rapidly evolving technology landscape, companies often struggle to deliver short-term results while simultaneously reinventing the organization and keeping the business running day-to-day. Post-implementation, some companies cannot even perform basic functions like processing orders efficiently or closing the books quickly at the end of a quarter. The problem: Leaders often fail to consider how to sustain value creation over time as programs scale from the pilot phase to wide-scale execution.

“Most implementations are viewed as IT projects,” says Tim Hertzig, a principal in Deloitte’s Technology practice and global product owner of Deloitte’s Ascend digital transformation solution. “These projects fail to achieve the value they initially aspire to, because they don’t factor in change management that ensures adoption and they don’t consider industry-leading practices.”

Technology rarely drives value alone, according to Kristi Kaplan, Deloitte principal and US executive sponsor of Deloitte’s Ascend platform. “Rather it’s how technology is implemented and adopted in an organization that actually creates the value,” she says. To deliver business results that gain momentum rather than fade away, executives need a long-term transformation plan.

According to Deloitte’s analysis, the right combination of digital transformation actions can unlock as much as $1.25 trillion in additional market capitalization across all Fortune 500 companies. On the other hand, implementing digital change for its own sake without a strategy and technology-aligned investments—“random acts of digital”—could cost firms $1.5 trillion.

Best practices for implementation

To unlock this potential value, there are a number of best practices leading companies use to design and execute digital transformations successfully, Deloitte has found. Three stand out:

Ensure inclusive governance: Project governance needs to span business, HR, finance, and IT stakeholders, creating transparency in reporting and decision-making to maintain forward momentum. Successful projects are jointly owned; all executives understand where they are in the project lifecycle and what decisions need to be made to keep the program moving.

“Where that transparency doesn’t exist, or where all the stakeholders are not at the table and do not feel ownership in these programs, the result can be an IT organization that’s driving what truly needs to be a business transformation,” says Kaplan. “When business leaders fail to own things like change management, technology adoption, and organizational retraining, the risk profile goes way up.”

“Executives need the assurance and the visibility that the ROI of their technology investments is being realized, and when there are risks, they need transparency before problems grow into full-blown issues,” Hertzig adds. “That transparency becomes embedded into the governance rhythms of an organization.”

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Read more

Generative AI has the power to surprise in a way that few other technologies can. Sometimes that’s a very good thing; other times, not so good. In theory, as generative AI improves, this issue should become less important. However, in reality, as generative AI becomes more “human” it can begin to turn sinister and unsettling, plunging us into what robotics has long described as the “uncanny valley.”

It might be tempting to dismiss this experience as something that can be corrected by bigger data sets or better training. However, insofar as it speaks to a disturbance in our mental model of the technology (e.g., “I don’t like what it did there”), it’s something that needs to be acknowledged and addressed.

Mental models and antipatterns

Mental models are an important concept in UX and product design, but they need to be more readily embraced by the AI community. At one level, mental models often go unnoticed because they are simply the routine patterns of assumption we bring to an AI system. This is something we discussed at length in the process of putting together the latest volume of the Thoughtworks Technology Radar, a biannual report based on our experiences working with clients all over the world.

For instance, we called out complacency with AI-generated code and replacing pair programming with generative AI as two practices we believe practitioners must avoid as the popularity of AI coding assistants continues to grow. Both emerge from poor mental models that fail to acknowledge how this technology actually works and where its limitations lie. The consequence is that the more convincing and “human” these tools become, the harder it is for us to recognize how the technology actually works and the limitations of the “solutions” it provides us.

Of course, for those deploying generative AI into the world, the risks are similar, perhaps even more pronounced. While the intent behind such tools is usually to create something convincing and usable, if such tools mislead, trick, or even merely unsettle users, their value and worth evaporate. It’s no surprise that legislation such as the EU AI Act, which requires deepfake creators to label content as “AI generated,” is being passed to address these problems.

It’s worth pointing out that this isn’t just an issue for AI and robotics. Back in 2011, our colleague Martin Fowler wrote about how certain approaches to building cross-platform mobile applications can create an uncanny valley, “where things work mostly like… native controls but there are just enough tiny differences to throw users off.”

Specifically, Fowler wrote something we think is instructive: “different platforms have different ways they expect you to use them that alter the entire experience design.” The point here, applied to generative AI, is that different contexts and different use cases come with different sets of assumptions and mental models that change the point at which users might drop into the uncanny valley. These subtle differences change one’s experience or perception of a large language model’s (LLM’s) output.

For example, for the drug researcher who wants vast amounts of synthetic data, accuracy at a micro level may be unimportant; for the lawyer trying to grasp legal documentation, accuracy matters a lot. In fact, dropping into the uncanny valley might just be the signal to step back and reassess your expectations.

Shifting our perspective

The uncanny valley of generative AI might be troubling, even something we want to minimize, but it should also remind us of generative AI’s limitations—it should encourage us to rethink our perspective.

There have been some interesting attempts to do that across the industry. One that stands out comes from Ethan Mollick, a professor at the University of Pennsylvania, who argues that AI shouldn’t be understood as good software but instead as “pretty good people.”

Therefore, our expectations about what generative AI can do and where it’s effective must remain provisional and flexible. To a certain extent, this might be one way of overcoming the uncanny valley: by reflecting on our assumptions and expectations, we remove the technology’s power to disturb or confound them.

However, simply calling for a mindset shift isn’t enough. There are various practices and tools that can help. One example is a technique we identified in the latest Technology Radar: getting structured outputs from LLMs. This can be done either by instructing a model to respond in a particular format when prompting or through fine-tuning. Tools like Instructor are making this easier, and the result is greater alignment between expectations and what the LLM will actually output. Something unexpected or not quite right might still happen, but this technique goes some way toward addressing that risk.
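To make that concrete, here is a minimal Python sketch of what structured output with Instructor might look like. The schema, model name, and prompt are illustrative placeholders rather than a recommendation from the Radar, and the exact client setup can vary with library versions.

```python
# A minimal sketch of structured LLM output using Instructor with a Pydantic
# schema. The schema fields, model name, and prompt are illustrative only.
from pydantic import BaseModel
from openai import OpenAI
import instructor


class DocumentSummary(BaseModel):
    title: str
    key_points: list[str]


# Patch the OpenAI client so responses are parsed and validated against the
# Pydantic schema instead of arriving as free-form prose.
client = instructor.from_openai(OpenAI())

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    response_model=DocumentSummary,
    messages=[{"role": "user", "content": "Summarize the attached contract."}],
)

print(summary.key_points)  # a typed Python list, not unpredictable prose
```

Because the output is forced into a known shape, the gap between what we expect and what the model returns narrows, which is exactly the alignment of mental models this technique is meant to support.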

There are other techniques too, including retrieval-augmented generation as a way of better controlling the “context window.” Frameworks and tools such as Ragas and DeepEval can help evaluate and measure how well these techniques are working; both are libraries that give AI developers metrics for qualities like faithfulness and relevance.
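As a rough illustration of how such measurement can be wired up, the sketch below scores a single question-and-answer pair with Ragas. The sample data is invented, and metric and dataset column names can differ between library versions.

```python
# A rough sketch of evaluating RAG output with Ragas. The question, answer,
# and retrieved context are invented; names may differ between versions.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

samples = Dataset.from_dict({
    "question": ["What does clause 4.2 of the contract cover?"],
    "answer": ["Clause 4.2 limits liability for indirect damages."],
    "contexts": [[
        "4.2 Neither party shall be liable for indirect or consequential damages."
    ]],
})

# Faithfulness: is the answer grounded in the retrieved context?
# Answer relevancy: does the answer actually address the question?
result = evaluate(samples, metrics=[faithfulness, answer_relevancy])
print(result)
```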

Measurement is important, as are relevant guidelines and policies for LLMs, such as LLM guardrails. It’s also important to take steps to better understand what’s actually happening inside these models. Completely unpacking these black boxes might be impossible, but observability tools like Langfuse can help. Doing so may go a long way toward reorienting our relationship with this technology, shifting mental models, and reducing the chances of falling into the uncanny valley.
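As one example of what that visibility can look like in practice, the sketch below traces an LLM call with Langfuse’s decorator so the prompt, response, and timing can be inspected later. The function and model names are placeholders, and the SDK import path differs slightly across versions.

```python
# A minimal sketch of tracing an LLM call with Langfuse so prompts, outputs,
# and latency are recorded for later inspection. Names are placeholders and
# the import path varies between SDK versions.
from langfuse.decorators import observe
from openai import OpenAI

client = OpenAI()


@observe()  # records this call's inputs, outputs, and timing as a trace
def answer_question(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


print(answer_question("Summarize our refund policy in two sentences."))
```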

An opportunity, not a flaw

These tools—part of a Cambrian explosion of generative AI tooling—can help practitioners rethink generative AI and, hopefully, build better and more responsible products. However, for the wider world, this work will remain invisible. What’s important is exploring how we can evolve toolchains to better control and understand generative AI, because existing mental models and conceptions of generative AI are a fundamental design problem, not a marginal issue we can choose to ignore.

Ken Mugrage is the principal technologist in the office of the CTO at Thoughtworks. Srinivasan Raguraman is a technical principal at Thoughtworks based in Singapore.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Read more

The UK driverless-car startup Wayve is headed west. The firm’s cars learned to drive on the streets of London. But Wayve has announced that it will begin testing its tech in and around San Francisco as well. And that brings a new challenge: Its AI will need to switch from driving on the left to driving on the right.

As visitors to or from the UK will know, making that switch is harder than it sounds. Your view of the road, how the vehicle turns—it’s all different, says Wayve’s vice president of software, Silvius Rus. Rus himself learned to drive on the left for the first time last year after years in the US. “Even for a human who has driven a long time, it’s not trivial,” he says.

Wayve’s US fleet of Ford Mustang Mach-Es.
WAYVE

The move to the US will be a test of Wayve’s technology, which the company claims is more general-purpose than what many of its rivals are offering. Wayve’s approach has attracted massive investment—including a $1 billion funding round that broke UK records this May—and partnerships with Uber and online grocery firms such as Asda and Ocado. But it will now go head to head with the heavyweights of the growing autonomous-car industry, including Cruise, Waymo, and Tesla.  

Back in 2022, when I first visited the company’s offices in north London, there were two or three vehicles parked in the building’s auto shop. But on a sunny day this fall, both the shop and the forecourt are full of cars. A billion dollars buys a lot of hardware.

I’ve come for a ride-along. In London, autonomous vehicles can still turn heads. But what strikes me as I sit in the passenger seat of one of Wayve’s Jaguar I-PACE cars isn’t how weird it feels to be driven around by a computer program, but how normal—how comfortable, how safe. This car drives better than I do.

Regulators have not yet cleared autonomous vehicles to drive on London’s streets without a human in the loop. A test driver sits next to me, his hands hovering a centimeter above the wheel as it turns back and forth beneath them. Rus gives a running commentary from the back.

The midday traffic is light, but that makes things harder, says Rus: “When it’s crowded, you tend to follow the car in front.” We steer around roadworks, cyclists, and other vehicles stopped in the middle of the street. It starts to rain. At one point I think we’re on the wrong side of the road. But it’s a one-way street: The car has spotted a sign that I didn’t. We approach every intersection with what feels like deliberate confidence.

At one point a blue car (with a human at the wheel) sticks its nose into the stream of traffic just ahead of us. Urban drivers know this can go two ways: Hesitate and it’s a cue for the other car to pull out; push ahead and you’re telling it to wait its turn. Wayve’s car pushes ahead.

The interaction lasts maybe a second. But it’s the most impressive moment of my ride. Wayve says its model has picked up lots of defensive driving habits like this. “It was our right of way, and the safest approach was to assert that,” says Rus. “It learned to do that; it’s not programmed.”

Learning to drive

Everything that Wayve’s cars do is learned rather than programmed. The company uses different technology from what’s in most other driverless cars. Instead of separate, specialized models trained to handle individual tasks like spotting obstacles or finding a route around them—models that must then be wired up to work together—Wayve uses an approach called end-to-end learning.

This means that Wayve’s cars are controlled by a single large model that learns all the individual tasks needed to drive at once, using camera footage, feedback from test drivers (many of whom are former driving instructors), and a lot of reruns in simulation.
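To give a sense of what “end-to-end” means here, a single network maps raw camera frames directly to driving commands and is trained to imitate recorded human actions. The sketch below is a deliberately toy illustration of that idea, not Wayve’s actual architecture or training setup, which is proprietary.

```python
# A toy sketch of end-to-end imitation learning: one network maps camera
# frames straight to [steering, throttle], trained against a human driver's
# recorded actions. Purely illustrative; not Wayve's model.
import torch
import torch.nn as nn


class TinyDrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(   # learns perception from raw pixels
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)    # outputs [steering, throttle]

    def forward(self, frames):          # frames: (batch, 3, height, width)
        return self.head(self.encoder(frames))


policy = TinyDrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One training step on a fake batch: random "camera frames" and the human
# actions recorded for those frames.
frames = torch.randn(8, 3, 96, 96)
human_actions = torch.randn(8, 2)
loss = nn.functional.mse_loss(policy(frames), human_actions)
loss.backward()
optimizer.step()
print(loss.item())
```

Because perception, prediction, and control all live in one model, behaviors like keeping to the left are learned from data rather than written as rules, which is why switching sides of the road becomes a training question rather than a code change.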

Wayve has argued that this approach makes its driving models more general-purpose. The firm has shown that it can take a model trained on the streets of London and then use that same model to drive cars in multiple UK cities—something that others have struggled to do.

But a move to the US is more than a simple relocation. It rewrites one of the most basic rules of driving—which side of the road to drive on. With Wayve’s single large model, there’s no left-hand-drive module to swap out. “We did not program it to drive on the left,” says Rus. “It’s just seen it enough to think that’s how it needs to drive. Even if there’s no marking on the road, it will still keep to the left.”  

“So how will the model learn to drive on the right? This will be an interesting question for the US.”

Answering that question involves figuring out whether the side of the road it drives on is a deep feature of Wayve’s model—intrinsic to its behavior—or a more superficial one that can be overridden with a little retraining.

Given the adaptability seen in the model so far, Rus believes it will switch to US streets just fine. He cites the way the cars have shown they can adapt to new UK cities, for example. “That gives us confidence in its capability to learn and to drive in new situations,” he says.

Under the hood

But Wayve needs to be certain. As well as testing its cars in San Francisco, Rus and his colleagues are poking around inside their model to find out what makes it tick. “It’s like you’re doing a brain scan and you can see there’s some activity in a certain part of the brain,” he says.

The team presents the model with many different scenarios and watches what parts of it get activated at specific times. One example is an unprotected turn—a turn that crosses traffic going in the opposite direction, without a traffic signal. “Unprotected turns are to the right here and to the left in the US,” says Rus. “So will it see them as similar? Or will it just see right turns as right turns?”
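The general pattern behind that kind of inspection can be sketched with forward hooks, which record a layer’s activations while the model processes a scenario. This is a generic probing idiom, not Wayve’s internal tooling, and the tiny model and inputs below are stand-ins.

```python
# An illustrative sketch of "watching which parts of a model light up":
# forward hooks record each layer's activations while the model runs on a
# scenario. A generic probing pattern, not Wayve's internal tooling.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),
)

activations = {}

def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach a hook to every linear layer so its output is captured.
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(record(name))

scenario = torch.randn(1, 10)  # stand-in for an encoded driving scenario
model(scenario)

for name, act in activations.items():
    print(name, act.abs().mean().item())  # crude measure of layer activity
```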

Figuring out why the model behaves as it does tells Wayve what kinds of scenarios require extra help. Using a hyper-detailed simulation tool called PRISM-1 that can reconstruct 3D street scenes from video footage, the company can generate bespoke scenarios and run the model through them over and over until it learns how to handle them. How much retraining might the model need? “I cannot tell you the amount. This is part of our secret sauce,” says Rus. “But it’s a small amount.”

Wayve’s simulation tool, PRISM-1, can reconstruct virtual street scenes from real video footage. Wayve uses the tool to help train its driving model.
WAYVE

The autonomous-vehicle industry is known for hype and overpromising. Within the past year, Cruise laid off hundreds after its cars caused chaos and injury on the streets of San Francisco. Tesla is facing federal investigation after its driver-assistance technology was blamed for multiple crashes, including a fatal collision with a pedestrian. 

But the industry keeps forging ahead. Waymo has said it is now giving 100,000 robotaxi rides a week in San Francisco, Los Angeles, and Phoenix. In China, Baidu claims it is giving some 287,000 rides in a handful of cities, including Beijing and Wuhan. Undaunted by the allegations that Tesla’s driver-assistance technology is unsafe, Elon Musk announced his Cybercab last week with a timeline that would put these driverless concept cars on the road by 2025. 

What should we make of it all? “The competition between robotaxi operators is heating up,” says Crijn Bouman, CEO and cofounder of Rocsys, a startup that makes charging stations for autonomous electric vehicles. “I believe we are close to their ChatGPT moment.”

“The technology, the business model, and the consumer appetite are all there,” Bouman says. “The question is which operator will seize the opportunity and come out on top.”

Others are more skeptical. We need to be very clear what we’re talking about when we talk about autonomous vehicles, says Saber Fallah, director of the Connected Autonomous Vehicle Research Lab at the University of Surrey, UK. Some of Baidu’s robotaxis still require a safety driver behind the wheel, for example. Cruise and Waymo have shown that a fully autonomous service is viable in certain locations. But it took years to train their vehicles to drive specific streets, and extending routes—safely—beyond existing neighborhoods will take time. “We won’t have robotaxis that can drive anywhere anytime soon,” says Fallah.

Fallah takes the extreme view that this won’t happen until all human drivers hand in their licenses. For robotaxis to be safe, they need to be the only vehicles on the road, he says. He thinks today’s driving models are still not good enough to interact with the complex and subtle behaviors of humans. There are just too many edge cases, he says.

Wayve is betting its approach will win out. In the US, it will begin by testing what it calls an advanced driver assistance system, a technology similar to Tesla’s. But unlike Tesla, Wayve plans to sell that technology to a wide range of existing car manufacturers. The idea is to build on this foundation to achieve full autonomy in the next few years. “We’ll get access to scenarios that are encountered by many cars,” says Rus. “The path to full self-driving is easier if you go level by level.”

But cars are just the start, says Rus. What Wayve is in fact building, he says, is an embodied model that could one day control many different types of machines, whether they have wheels, wings, or legs. 

“We’re an AI shop,” he says. “Driving is a milestone, but it’s a stepping stone as well.”

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: The AI Hype Index

There’s no denying that the AI industry moves fast. Each week brings a bold new announcement, product release, or lofty claim that pushes the bounds of what we previously thought was possible. Separating AI fact from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at what made the cut.

The inaugural AI Hype Index makes an appearance in the latest print issue of MIT Technology Review, which is all about the weird and wonderful world of food. If you don’t already, subscribe to receive future copies once they land.

Google DeepMind is making its AI text watermark open source

What’s new: Google DeepMind has developed a tool for identifying AI-generated text and is making it available open source. The tool, called SynthID, is part of a larger family of watermarking tools for generative AI outputs. 

Why it matters: Watermarks have emerged as an important tool to help people determine when something is AI generated, which could help counter harms such as misinformation. But they’re not an all-purpose solution. Read the full story.

—Melissa Heikkilä

Why agriculture is a tough climate problem to solve

It’s a real problem, from a climate perspective at least, that burgers taste good, and so do chicken sandwiches and cheese and just about anything that has butter in it. It’s often very hard to persuade people to change their eating habits.

We could all stand to make some choices that could reduce the emissions associated with the food on our plates. But we’re also going to need to innovate around people’s love for burgers—and fix our food system not just in the kitchen, but on the farm. 

Our climate team, James Temple and Casey Crownhart, spoke with leaders from agricultural companies Pivot Bio and Rumin8 at our recent Roundtables online event to hear about the problems they’re trying to solve and how they’re doing it. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Russia is conducting a viral disinformation campaign to smear Kamala Harris
Microsoft researchers fear they’re encouraging violent protests after election day. (AP News)
+ Donald Trump and Harris are duking it out on TikTok for the Gen Z vote. (FT $)
+ Marjorie Taylor Greene has been spreading falsehoods about voting machines. (NYT $)

2 Scientific racism is widespread in AI search engine results
Debunked eugenics claims are surfacing on Google, Microsoft, and Perplexity’s engines. (Wired $)
+ Perplexity wants to ink deals with news publishers in the wake of a legal case. (WSJ $)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

3 The Federal Aviation Administration has officially approved air taxis 
It’s the first new aircraft category to get the green light in close to 80 years. (WP $)

4 Apple is dramatically cutting production of its Vision Pro headset
And it could cease assembly altogether from next month. (The Information $)
+ Apple recently announced its first film specifically for the device. (Variety $)

5 Nvidia has launched a new Hindi language AI model 
Business is booming for the chip giant in India, which is hungry for AI. (Reuters)
+ This company is building AI for African languages. (MIT Technology Review)

6 China’s Great Firewall now extends to space
Its satellite-delivered broadband will come with a side order of censorship. (IEEE Spectrum)
+ Hong Kong is safe from China’s Great Firewall—for now. (MIT Technology Review)

7 California has a plan for its wood waste
But the well-intentioned projects are up against a major problem. (Bloomberg $)
+ Saving nature doesn’t appear to be a national priority for the USA. (Vox)
+ The quest to build wildfire-resistant homes. (MIT Technology Review)

8 Goodbye passwords, hello passcodes
They’re more secure, sure, but are they easier to remember? (Vox)
+ The end of passwords. (MIT Technology Review)

9 These robots are grape-picking pros 🍇
The harvest window is very short, and machines could help. (Economist $)

10 Vinted is looking to sell more than just clothes
Watch your back, eBay. (FT $)

Quote of the day

“It’s like when an artist has the concept of a painting in their mind, but it can’t be realized unless they have the paints and brushes to make it.”

—G Dan Hutcheson, vice chair of consultancy firm TechInsights, explains to the Financial Times why the global semiconductor industry is so reliant on advanced design software to create the latest and greatest chips.

The big story

These artificial snowdrifts protect seal pups from climate change

April 2024

For millennia, during Finland’s blistering winters, wind drove snow into meters-high snowbanks along Lake Saimaa’s shoreline, offering prime real estate from which seals carved cave-like dens to shelter from the elements and raise newborns.

But in recent decades, these snowdrifts have failed to form in sufficient numbers, as climate change has brought warming temperatures and rain in place of snow, decimating the seal population.

For the last 11 years, humans have stepped in to construct what nature can no longer reliably provide. Human-made snowdrifts, built using handheld snowplows, now house 90% of seal pups. They are the latest in a raft of measures that have brought Saimaa’s seals back from the brink of extinction. Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Unhappy with clutter in your home? It’s time to pull off an interior optical illusion.
+ Take care of your zippers, and your zippers will take care of you.
+ Don’t be swayed by electronic dupes—they’re rarely worth the savings.
+ Wait: don’t unsend that message!

Read more

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

As a climate reporter, I’m all too aware of the greenhouse-gas emissions that come from food production. And yet, I’m not a vegan, and I do enjoy a good cheeseburger (at least on occasion). 

It’s a real problem, from a climate perspective at least, that burgers taste good, and so do chicken sandwiches and cheese and just about anything that has butter in it. It can be hard to persuade people to change their eating habits, especially since food is tied up in our social lives and our cultures. 

We could all stand to make some choices that could reduce the emissions associated with the food on our plates. But the longer I write about agriculture and climate, the more I think we’re also going to need to innovate around people’s love for burgers—and fix our food system not just in the kitchen, but on the farm. 

If we lump in everything it takes to get food grown, processed, and transported to us, agriculture accounts for between 20% and 35% of annual global greenhouse-gas emissions. (The range is huge because estimates can vary in what they include and how they account for things like land use, the impact of which is tricky to measure.) 

So when it came time to put together our list of 15 Climate Tech Companies to Watch, which we released earlier this month, we knew we wanted to represent the massive challenge that is our food system. 

We ended up choosing two companies in agriculture for this year’s list, Pivot Bio and Rumin8. My colleague James Temple and I spoke with leaders from both these businesses at our recent Roundtables online event, and it was fascinating to hear from them about the problems they’re trying to solve and how they’re doing it. 

Pivot Bio is using microbes to help disrupt the fertilizer industry. Today, applying nitrogen-based fertilizers to fields is basically like putting gas into a leaky gas tank, as Pivot cofounder Karsten Temme put it at the event. 

Plants rely on nitrogen to grow, but they fail to take up a lot of the nitrogen in fertilizers applied in the field. Since fertilizer requires a ton of energy to produce and can wind up emitting powerful greenhouse gases if plants don’t use it, that’s a real problem.

Pivot Bio uses microbes to help get nitrogen from the air into plants, and the company’s current generation of products can help farmers cut fertilizer use by 25%. 

Rumin8 has its sights set on cattle, making supplements that help them emit less methane, a powerful greenhouse gas. Cows have a complicated digestive system that involves multiple stomachs and a whole lot of microbes that help them digest food. Those microbes produce methane that the cows then burp up. “It’s really rude of them,” quipped Matt Callahan, Rumin8’s cofounder and counsel, at the event. 

In part because of the powerful warming effects of methane, beef is among the worst foods for the climate. Beef can account for up to 10 times more greenhouse-gas emissions than poultry, for example. 

Rumin8 makes an additive that can go into the food or water supply of dairy and beef cattle that can help reduce the methane they burp up. The chemical basically helps the cows use that gas as energy instead, so it can boost their growth—a big benefit to farmers. The company has seen methane reductions as high as 90%, depending on how the cow is getting the supplement (effects aren’t as strong for beef cattle, which often don’t have as close contact with farmers and may not get as strong a dose of the supplement over time as dairy cattle do). 

My big takeaway from our discussion, and from researching and picking the companies on our list this year, is that there’s a huge range of work being done to cut emissions from agriculture on the product side. That’s crucial, because I’m personally skeptical that a significant chunk of the world is going to quickly and voluntarily give up all the tasty but emissions-intensive foods that they’re used to. 

That’s not to say individual choices can’t make a difference. I love beans and lentils as much as the next girl, and we could all stand to make choices that cut down our individual climate impact. And it doesn’t have to be all or nothing. Anyone can choose to eat a little bit less beef specifically, and fewer meat and animal products in general (which tend to be more emissions-intensive than plant-based options). Another great strategy is to focus on cutting down your food waste, which not only reduces emissions but also saves you money. 

But with appetites and budgets for beef and other emissions-intensive foods continuing to grow worldwide, I think we’re also going to need to see a whole lot of innovation that helps lower the emissions of existing food products that we all know and love, including beef. 

There’s no one magic solution that’s going to solve our climate problem in agriculture. The key is going to be both shifting diets through individual and community action and adopting new, lower-emissions options that companies bring to the table. 


Now read the rest of The Spark

Related reading

If you missed our Roundtables event “Producing Climate-Friendly Food,” you can check out the recording here. And for more details on the businesses we mentioned, read our profiles on Pivot Bio and Rumin8 from our 2024 list of 15 Climate Tech Companies to Watch.

There are also some fascinating climate stories from the new, food-focused issue of our print magazine: 


Another thing

As more EVs hit the roads, there’s a growing concern about battery fires, which are a relatively rare but dangerous occurrence. 

Aspen Aerogels is making super-light materials that can help suppress battery fires, and the company just got a huge boost from the US Department of Energy. Read more about the $670.6 million loan and the details of the technology in my latest story.

Keeping up with climate  

Hurricane Milton disrupted the supply of fresh drinking water, so a Florida hospital deployed a machine to harvest it out of the air. (Wired)

There may be a huge supply of lithium in an underground brine reservoir in Arkansas. Using this source of the crucial battery metal will require companies to scale up new ways of extracting it. (New York Times)

There’s been a flurry of new deals between Big Tech and the nuclear industry, but Amazon is going one step further with its latest announcement. The company is supporting development of a new project rather than just agreeing to step in once electricity is ready. (Heatmap)
→ Here’s why Microsoft is getting involved in a plan to revive a nuclear reactor at Three Mile Island. (MIT Technology Review)

Japan’s most popular rice is in danger because of rising temperatures. Koshihikari rice has a low tolerance for heat, and scientists are racing to breed new varieties that can handle a changing climate. (New York Times)

There are some pretty straightforward solutions that could slash methane emissions from landfills, including requiring more sites to install gas-capture systems. Landfills are the third-largest source of the powerful greenhouse gas. (Canary Media)

Heat pump sales have slowed in the US and stalled in Europe. The technology is struggling in part because of high interest rates, increasing costs, and misinformation about the appliances. (Washington Post)
→ Here’s everything you need to know about how heat pumps work. (MIT Technology Review)

Read more