Ice Lounge Media

Getting crypto out of the 'AOL era' — Sandeep Nailwal

The current state of crypto is akin to the internet’s “America Online” (AOL) era of the late 1990s, when the user experience was clunky and technical, use cases were limited, and connections ran at dial-up speeds, according to Polygon co-founder Sandeep Nailwal.

In an interview with Cointelegraph, Nailwal identified several key areas of development to improve user experience, including seamless fiat on- and off-ramps, custody solutions that feature key recovery, and hardware wallets built into mobile devices.

“We are in the dial-up era of the internet where even connecting to the Internet was a tedious task, like you had to be a mini-engineer to be able to connect to the Internet — we are still there in crypto.” —Sandeep Nailwal

“We are probably still in 1998, and it is going to take at least 10 to 15 years to see crypto in its full glory,” the Polygon founder added.

While considered revolutionary at the time, the AOL days of the internet featured limited functionality and a high barrier to entry. Source: PC Magazine

The internet took 30 to 40 years to achieve mass adoption and began with a limited number of use cases. In the late 1990s, the AOL era of the internet was primarily focused on email and basic web browsing; today, the internet encompasses the entire economy.

Nailwal said that the current state of crypto is similar, with financial use cases, particularly market speculation, being the core focus of crypto at this time.

However, once the financial use cases have been fully developed and achieved sufficient adoption, crypto adoption will spread to alternative use cases such as decentralized social media, gaming, and other niche sectors, he said.

Related: Security concerns slow crypto payment adoption worldwide — Survey

Being in crypto today is being early to the party

Nailwal pointed out that even the base use case for cryptocurrencies, which is financial, has not been fully developed.

According to a February 2025 report from Bitcoin (BTC) financial services company River, only 4% of individuals worldwide own BTC, the original cryptocurrency, which has the largest market cap and the most mainstream appeal.

Bitcoin’s adoption path. Source: River

The report found that BTC has only achieved about 3% of its total adoption path when institutions, the total addressable market, and proper portfolio allocations are considered.

This small number of BTC holders indicates that crypto mass adoption is still years away and signals that the entire industry is still in the early adopter phase of development.

Magazine: They solved crypto’s janky UX problem — you just haven’t noticed yet

Read more

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

With the recent news that the Atlantic’s editor in chief was accidentally added to a group Signal chat for American leaders planning a bombing in Yemen, many people are wondering: What is Signal? Is it secure? If government officials aren’t supposed to use it for military planning, does that mean I shouldn’t use it either?

The answer is: Yes, you should use Signal, but government officials having top-secret conversations shouldn’t use Signal.

Read on to find out why.

What is Signal?

Signal is an app you can install on your iPhone or Android phone, or on your computer. It lets you send secure texts, images, and phone or video chats with other people or groups of people, just like iMessage, Google Messages, WhatsApp, and other chat apps.

Installing Signal is a two-minute process—again, it’s designed to work just like other popular texting apps.

Why is it a problem for government officials to use Signal?

Signal is very secure—as we’ll see below, it’s the best option out there for having private conversations with your friends on your cell phone.

But you shouldn’t use it if you have a legal obligation to preserve your messages, such as when conducting government business, because Signal prioritizes privacy over the ability to preserve data. It’s designed to securely delete data when you’re done with it, not to keep it. This makes it uniquely unsuited to complying with public records laws.

You also shouldn’t use it if your phone might be a target of sophisticated hackers, because Signal can only do its job if the phone it is running on is secure. If your phone has been hacked, then the hacker can read your messages regardless of what software you are running.

This is why you shouldn’t use Signal to discuss classified material or military plans. For military communication your civilian phone is always considered hacked by adversaries, so you should instead use communication equipment that is safer—equipment that is physically guarded and designed to do only one job, making it harder to hack.

What about everyone else?

Signal is designed from the bottom up as a very private space for conversation. Cryptographers are confident that as long as your phone is otherwise secure, no one can read your messages.

Why should you want that? Because private spaces for conversation are very important. In the US, the First Amendment recognizes, in the right to freedom of assembly, that we all need private conversations among our own selected groups in order to function.

And you don’t need the First Amendment to tell you that. You know, just like everyone else, that you can have important conversations in your living room, bedroom, church coffee hour, or meeting hall that you could never have on a public stage. Signal gives us the digital equivalent of that—it’s a space where we can talk, among groups of our choice, about the private things that matter to us, free of corporate or government surveillance. Our mental health and social functioning require that.

So if you’re not legally required to record your conversations, and not planning secret military operations, go ahead and use Signal—you deserve the privacy.

How do we know Signal is secure?

People often give up on finding digital privacy and end up censoring themselves out of caution. So are there really private ways to talk on our phones, or should we just assume that everything is being read anyway?

The good news is: For most of us who aren’t individually targeted by hackers, we really can still have private conversations.

Signal is designed to ensure that if you know your phone and the phones of other people in your group haven’t been hacked (more on that later), you don’t have to trust anything else. It uses many techniques from the cryptography community to make that possible.

Most important and well-known is “end-to-end encryption,” which means that messages can be read only on the devices involved in the conversation and not by servers passing the messages back and forth.
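To make “end-to-end” concrete, here is a deliberately insecure toy sketch in Python. It is not Signal’s actual protocol (Signal uses the Double Ratchet algorithm); it only shows the shape of the idea: encryption and decryption happen on the devices, and the server in the middle only ever relays ciphertext.

```python
import os

# Toy sketch only: a one-time-pad-style XOR, NOT Signal's real protocol
# and NOT secure in practice.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at the usual place"

# 1. Alice and Bob hold a shared secret key (in Signal, agreed via key
#    exchange between their devices; the server never learns it).
shared_key = os.urandom(len(plaintext))

# 2. Alice encrypts on her own device before anything leaves it.
ciphertext = xor_bytes(plaintext, shared_key)

# 3. The server only ever sees and relays this ciphertext.
assert ciphertext != plaintext

# 4. Bob decrypts on his own device with the shared key.
assert xor_bytes(ciphertext, shared_key) == plaintext
```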

But Signal uses other techniques to keep your messages private and safe as well. For example, it goes to great lengths to make it hard for the Signal server itself to know who else you are talking to (a feature known as “sealed sender”), or for an attacker who records traffic between phones to later decrypt the traffic by seizing one of the phones (“perfect forward secrecy”).
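The forward-secrecy idea can be sketched with a toy hash ratchet in Python. This is a drastic simplification of Signal’s Double Ratchet, not the real thing: each message key is derived one-way from the previous one and then deleted, so an attacker who seizes a phone later holds only the current key, not past ones.

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """Derive the next chain key; SHA-256 cannot be run backwards."""
    return hashlib.sha256(key).digest()

key = b"initial shared secret"
message_keys = []
for _ in range(3):
    key = ratchet(key)
    message_keys.append(key)  # each key encrypts one message, then is deleted

# Moving forward along the chain is easy...
assert ratchet(message_keys[0]) == message_keys[1]
# ...but there is no way to compute message_keys[0] from message_keys[1],
# so recorded traffic stays unreadable even if the current key is seized.
```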

These are only a few of the many security properties built into the protocol, which is well enough designed and vetted that other messaging apps, such as WhatsApp and Google Messages, use it too.

Signal is also designed so we don’t have to trust the people who make it. The source code for the app is available online and, because of its popularity as a security tool, is frequently audited by experts.

And even though its security does not rely on our trust in the publisher, it does come from a respected source: the Signal Technology Foundation, a nonprofit whose mission is to “protect free expression and enable secure global communication through open-source privacy technology.” The app itself, and the foundation, grew out of a community of prominent privacy advocates. The foundation was started by Moxie Marlinspike, a cryptographer and longtime advocate of secure private communication, and Brian Acton, a cofounder of WhatsApp.

Why do people use Signal over other text apps? Are other ones secure?

Many apps offer end-to-end encryption, and it’s not a bad idea to use them for a measure of privacy. But Signal is a gold standard for private communication because it is secure by default: Unless you add someone you didn’t mean to, it’s very hard for a chat to accidentally become less secure than you intended.

That’s not necessarily the case for other apps. For example, iMessage conversations are sometimes end-to-end encrypted, but only if your chat has “blue bubbles,” and they aren’t encrypted in iCloud backups by default. Google Messages are sometimes end-to-end encrypted, but only if the chat shows a lock icon. WhatsApp is end-to-end encrypted but logs your activity, including “how you interact with others using our Services.”

Signal is careful not to record who you are talking with, to offer ways to reliably delete messages, and to keep messages secure even in online phone backups. This focus demonstrates the benefits of an app coming from a nonprofit focused on privacy rather than a company that sees security as a “nice to have” feature alongside other goals.

(Conversely, and as a warning, using Signal makes it rather easier to accidentally lose messages! Again, it is not a good choice if you are legally required to record your communication.)

Applications like WhatsApp, iMessage, and Google Messages do offer end-to-end encryption and provide much better security than nothing. The worst option of all is regular SMS text messages (“green bubbles” on iOS); those are sent unencrypted and are likely collected by mass government surveillance.

Wait, how do I know that my phone is secure?

Signal is an excellent choice for privacy if you know that the phones of everyone you’re talking with are secure. But how do you know that? It’s easy to give up on a feeling of privacy if you never feel good about trusting your phone anyway.

One good place to start for most of us is simply to make sure your phone is up to date. Governments often do have ways of hacking phones, but hacking up-to-date phones is expensive and risky and reserved for high-value targets. For most people, simply having your software up to date will remove you from a category that hackers target.

If you’re a potential target of sophisticated hacking, then don’t stop there. You’ll need extra security measures, and guides from the Freedom of the Press Foundation and the Electronic Frontier Foundation are a good place to start.

But you don’t have to be a high-value target to value privacy. The rest of us can do our part to re-create that private living room, bedroom, church, or meeting hall simply by using an up-to-date phone with an app that respects our privacy.

Jack Cushman is a fellow of the Berkman Klein Center for Internet and Society and directs the Library Innovation Lab at Harvard Law School Library. He is an appellate lawyer, computer programmer, and former board member of the ACLU of Massachusetts.

Read more

The AI firm Anthropic has developed a way to peer inside a large language model and watch what it does as it comes up with a response, revealing key new insights into how the technology works. The takeaway: LLMs are even stranger than we thought.

The Anthropic team was surprised by some of the counterintuitive workarounds that large language models appear to use to complete sentences, solve simple math problems, suppress hallucinations, and more, says Joshua Batson, a research scientist at the company.

It’s no secret that large language models work in mysterious ways. Few—if any—mass-market technologies have ever been so little understood. That makes figuring out what makes them tick one of the biggest open challenges in science.

But it’s not just about curiosity. Shedding some light on how these models work would expose their weaknesses, revealing why they make stuff up and can be tricked into going off the rails. It would help resolve deep disputes about exactly what these models can and can’t do. And it would show how trustworthy (or not) they really are.

Batson and his colleagues describe their new work in two reports published today. The first presents Anthropic’s use of a technique called circuit tracing, which lets researchers track the decision-making processes inside a large language model step by step. Anthropic used circuit tracing to watch its LLM Claude 3.5 Haiku carry out various tasks. The second (titled “On the Biology of a Large Language Model”) details what the team discovered when it looked at 10 tasks in particular.

“I think this is really cool work,” says Jack Merullo, who studies large language models at Brown University in Providence, Rhode Island, and was not involved in the research. “It’s a really nice step forward in terms of methods.”

Circuit tracing is not itself new. Last year Merullo and his colleagues analyzed a specific circuit in a version of OpenAI’s GPT-2, an older large language model that OpenAI released in 2019. But Anthropic has now analyzed a number of different circuits as a far larger and far more complex model carries out multiple tasks. “Anthropic is very capable at applying scale to a problem,” says Merullo.

Eden Biran, who studies large language models at Tel Aviv University, agrees. “Finding circuits in a large state-of-the-art model such as Claude is a nontrivial engineering feat,” he says. “And it shows that circuits scale up and might be a good way forward for interpreting language models.”

Circuits chain together different parts—or components—of a model. Last year, Anthropic identified certain components inside Claude that correspond to real-world concepts. Some were specific, such as “Michael Jordan” or “greenness”; others were more vague, such as “conflict between individuals.” One component appeared to represent the Golden Gate Bridge. Anthropic researchers found that if they turned up the dial on this component, Claude could be made to self-identify not as a large language model but as the physical bridge itself.

The latest work builds on that research and the work of others, including Google DeepMind, to reveal some of the connections between individual components. Chains of components are the pathways between the words put into Claude and the words that come out.  

“It’s tip-of-the-iceberg stuff. Maybe we’re looking at a few percent of what’s going on,” says Batson. “But that’s already enough to see incredible structure.”

Growing LLMs

Researchers at Anthropic and elsewhere are studying large language models as if they were natural phenomena rather than human-built software. That’s because the models are trained, not programmed.

“They almost grow organically,” says Batson. “They start out totally random. Then you train them on all this data and they go from producing gibberish to being able to speak different languages and write software and fold proteins. There are insane things that these models learn to do, but we don’t know how that happened because we didn’t go in there and set the knobs.”

Sure, it’s all math. But it’s not math that we can follow. “Open up a large language model and all you will see is billions of numbers—the parameters,” says Batson. “It’s not illuminating.”

Anthropic says it was inspired by brain-scan techniques used in neuroscience to build what the firm describes as a kind of microscope that can be pointed at different parts of a model while it runs. The technique highlights components that are active at different times. Researchers can then zoom in on different components and record when they are and are not active.

Take the component that corresponds to the Golden Gate Bridge. It turns on when Claude is shown text that names or describes the bridge or even text related to the bridge, such as “San Francisco” or “Alcatraz.” It’s off otherwise.

Yet another component might correspond to the idea of “smallness”: “We look through tens of millions of texts and see it’s on for the word ‘small,’ it’s on for the word ‘tiny,’ it’s on for the word ‘petite,’ it’s on for words related to smallness, things that are itty-bitty, like thimbles—you know, just small stuff,” says Batson.
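As a loose analogy (hypothetical word list and scoring, not how Anthropic’s learned features actually work), you can think of such a component as a unit whose activation is high on smallness-related text and near zero otherwise:

```python
# Hypothetical stand-in for a learned "smallness" feature: score the
# fraction of tokens in a text that relate to smallness.
SMALLNESS_WORDS = {"small", "tiny", "petite", "itty-bitty", "thimble", "thimbles"}

def smallness_activation(text: str) -> float:
    """Toy feature detector: fraction of tokens that are smallness words."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    hits = sum(t in SMALLNESS_WORDS for t in tokens)
    return hits / max(len(tokens), 1)

assert smallness_activation("small tiny petite") == 1.0   # fully "on"
assert smallness_activation("golden gate bridge") == 0.0  # "off"
```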

Having identified individual components, Anthropic then follows the trail inside the model as different components get chained together. The researchers start at the end, with the component or components that led to the final response Claude gives to a query. Batson and his team then trace that chain backwards.
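A toy version of that backward trace, over a made-up component graph (hypothetical component names and wiring, not Anthropic’s data), might look like this:

```python
# Hypothetical map of which components feed which, ending at the
# component behind the final answer.
feeds_into = {
    "smallness": ["opposites"],
    "opposites": ["answer:large"],
    "lang:english": ["answer:large"],
}

def trace_back(target: str, graph: dict) -> set:
    """Return every component upstream of `target`, walking edges backwards."""
    upstream = [src for src, dsts in graph.items() if target in dsts]
    chain = set(upstream)
    for src in upstream:
        chain |= trace_back(src, graph)
    return chain

# Starting from the answer component, recover the whole chain behind it.
assert trace_back("answer:large", feeds_into) == {"smallness", "opposites", "lang:english"}
```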

Odd behavior

So: What did they find? Anthropic looked at 10 different behaviors in Claude. One involved the use of different languages. Does Claude have a part that speaks French and another part that speaks Chinese, and so on?

The team found that Claude used components independent of any language to answer a question or solve a problem and then picked a specific language when it replied. Ask it “What is the opposite of small?” in English, French, and Chinese and Claude will first use the language-neutral components related to “smallness” and “opposites” to come up with an answer. Only then will it pick a specific language in which to reply. This suggests that large language models can learn things in one language and apply them in other languages.

Anthropic also looked at how Claude solved simple math problems. The team found that the model seems to have developed its own internal strategies that are unlike those it will have seen in its training data. Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95.
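Those two parallel paths can be sketched roughly in Python. This is a loose illustration of the reported behavior, not Anthropic’s actual circuits, and not a robust general-purpose adder:

```python
def claude_style_add(a: int, b: int) -> int:
    """Loose sketch of the two-path strategy described above."""
    # Path 1: rough magnitude. Add coarse approximations of each number
    # (36 + 59 becomes "40ish + 60ish").
    approx = round(a, -1) + round(b, -1)

    # Path 2: exact last digit. 6 + 9 = 15, so the answer must end in 5.
    ones = (a % 10 + b % 10) % 10

    # Combine: pick the number nearest the rough estimate whose last
    # digit agrees with path 2.
    candidates = [approx + d for d in range(-9, 10) if (approx + d) % 10 == ones]
    return min(candidates, key=lambda c: abs(c - approx))

print(claude_style_add(36, 59))  # -> 95, matching 36 + 59
```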

And yet if you then ask Claude how it worked that out, it will say something like: “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” In other words, it gives you a common approach found everywhere online rather than what it actually did. Yep! LLMs are weird. (And not to be trusted.)

The steps that Claude 3.5 Haiku used to solve a simple math problem were not what Anthropic expected; nor were they the steps Claude claimed it took. Source: Anthropic

This is clear evidence that large language models will give reasons for what they do that do not necessarily reflect what they actually did. But this is true for people too, says Batson: “You ask somebody, ‘Why did you do that?’ And they’re like, ‘Um, I guess it’s because I was— .’ You know, maybe not. Maybe they were just hungry and that’s why they did it.”

Biran thinks this finding is especially interesting. Many researchers study the behavior of large language models by asking them to explain their actions. But that might be a risky approach, he says: “As models continue getting stronger, they must be equipped with better guardrails. I believe—and this work also shows—that relying only on model outputs is not enough.”

A third task that Anthropic studied was writing poems. The researchers wanted to know if the model really did just wing it, predicting one word at a time. Instead they found that Claude somehow looked ahead, picking the word at the end of the next line several words in advance.  

For example, when Claude was given the prompt “A rhyming couplet: He saw a carrot and had to grab it,” the model responded, “His hunger was like a starving rabbit.” But using their microscope, they saw that Claude had already hit upon the word “rabbit” when it was processing “grab it.” It then seemed to write the next line with that ending already in place.

This might sound like a tiny detail. But it goes against the common assumption that large language models always work by picking one word at a time in sequence. “The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”

“I thought that was cool,” says Merullo. “One of the joys of working in the field is moments like that. There’s been maybe small bits of evidence pointing toward the ability of models to plan ahead, but it’s been a big open question to what extent they do.”

Anthropic then confirmed its observation by turning off the placeholder component for “rabbitness.” Claude responded with “His hunger was a powerful habit.” And when the team replaced “rabbitness” with “greenness,” Claude responded with “freeing it from the garden’s green.”

Anthropic also explored why Claude sometimes made stuff up, a phenomenon known as hallucination. “Hallucination is the most natural thing in the world for these models, given how they’re just trained to give possible completions,” says Batson. “The real question is, ‘How in God’s name could you ever make it not do that?’”

The latest generation of large language models, such as Claude 3.5, Gemini, and GPT-4o, hallucinate far less than previous versions, thanks to extensive post-training (the steps that take an LLM trained on the internet and turn it into a usable chatbot). But Batson’s team was surprised to find that this post-training seems to have made Claude refuse to speculate as a default behavior. When it did respond with false information, it was because some other component had overridden the “don’t speculate” component.

This seemed to happen most often when the speculation involved a celebrity or other well-known entity. It’s as if the amount of information available pushed the speculation through, despite the default setting. When Anthropic overrode the “don’t speculate” component to test this, Claude produced lots of false statements about individuals, including claiming that Batson was famous for inventing the Batson principle (he isn’t).

Still unclear

Because we know so little about large language models, any new insight is a big step forward. “A deep understanding of how these models work under the hood would allow us to design and train models that are much better and stronger,” says Biran.

But Batson notes there are still serious limitations. “It’s a misconception that we’ve found all the components of the model or, like, a God’s-eye view,” he says. “Some things are in focus, but other things are still unclear—a distortion of the microscope.”

And it takes several hours for a human researcher to trace the responses to even very short prompts. What’s more, these models can do a remarkable number of different things, and Anthropic has so far looked at only 10 of them.

Batson also says there are big questions that this approach won’t answer. Circuit tracing can be used to peer at the structures inside a large language model, but it won’t tell you how or why those structures formed during training. “That’s a profound question that we don’t address at all in this work,” he says.

But Batson sees this as the start of a new era in which it is possible, at last, to find real evidence for how these models work: “We don’t have to be, like: ‘Are they thinking? Are they reasoning? Are they dreaming? Are they memorizing?’ Those are all analogies. But if we can literally see step by step what a model is doing, maybe now we don’t need analogies.”

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside a romance scam compound—and how people get tricked into being there

Gavesh’s journey had started, seemingly innocently, with a job ad on Facebook promising work he desperately needed.

Instead, he found himself trafficked into a business commonly known as “pig butchering”—a form of fraud in which scammers form romantic or other close relationships with targets online and extract money from them. The Chinese crime syndicates behind the scams have netted billions of dollars, and they have used violence and coercion to force their workers, many of them people trafficked like Gavesh, to carry out the frauds from large compounds, several of which operate openly in the quasi-lawless borderlands of Myanmar.

We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global companies, including American social media and dating apps and international cryptocurrency and messaging platforms, have given the fraud business the means to become industrialized. 

By the same token, it is Big Tech that may hold the key to breaking up the scam syndicates—if only these companies can be persuaded or compelled to act. Read the full story.

—Peter Guest & Emily Fishbein

How to save a glacier

There’s a lot we don’t understand about how glaciers move and how soon some of the most significant ones could collapse into the sea. That could be a problem, since melting glaciers could lead to multiple feet of sea-level rise this century, potentially displacing millions of people who live and work along the coasts.

A new group is aiming not only to further our understanding of glaciers but also to look into options to save them if things move toward a worst-case scenario, as my colleague James Temple outlined in his latest story. One idea: refreezing glaciers in place.

The whole thing can sound like science fiction. But once you consider how huge the stakes are, I think it gets easier to understand why some scientists say we should at least be exploring these radical interventions. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

MIT Technology Review Narrated: How tracking animal movement may save the planet

Researchers have long dreamed of creating an Internet of Animals. And they’re getting closer to monitoring 100,000 creatures—and revealing hidden facets of our shared world.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has announced 25% tariffs on imported cars and parts
The measures are likely to make new cars significantly more expensive for Americans. (NYT $)
+ Moving car manufacturing operations to the US won’t be easy. (WP $)
+ It’s not just big businesses that will suffer, either. (The Atlantic $)
+ How Trump’s tariffs could drive up the cost of batteries, EVs, and more. (MIT Technology Review)

2 China is developing an AI system to increase its online censorship 
A leaked dataset demonstrates how LLMs could rapidly filter undesirable material. (TechCrunch)

3 Trump may reduce tariffs on China to encourage a TikTok deal
The Chinese-owned company has until April 5 to find a new US owner. (Insider $)
+ The national security concerns surrounding it haven’t gone away, though. (NYT $)

4 OpenAI’s new image generator can ape Studio Ghibli’s distinctive style
Which raises the question of whether the model was trained on Ghibli’s images. (TechCrunch)
+ The tool’s popularity means its rollout to non-paying users has been delayed. (The Verge)
+ The AI lab waging a guerrilla war over exploitative AI. (MIT Technology Review)

5 DOGE planned to dismantle USAID from the beginning
New court filings reveal the department’s ambitions to infiltrate the system. (Wired $)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

6 Wildfires are getting worse in the southwest of the US
While federal fire spending is concentrated mainly in the west, the risk is rising in South Carolina and Texas too. (WP $)
+ North and South Carolina were recovering from Hurricane Helene when the fires struck. (The Guardian)
+ How AI can help spot wildfires. (MIT Technology Review)

7 A quantum computer has generated—and verified—truly random numbers
Which is good news for cryptographers. (Bloomberg $)
+ Cybersecurity analysts are increasingly worried about the so-called Q-Day. (Wired $)
+ Amazon’s first quantum computing chip makes its debut. (MIT Technology Review)

8 What’s next for weight-loss drugs 💉
Competition is heating up, but will patients be the ones to benefit? (New Scientist $)
+ Drugs like Ozempic now make up 5% of prescriptions in the US. (MIT Technology Review)

9 At least we’ve still got memes
Poking fun at the Trump administration’s decisions is a form of online resistance. (New Yorker $)

10 Can you truly be friends with a chatbot?
People are starting to find out. (Vox)
+ The AI relationship revolution is already here. (MIT Technology Review)

Quote of the day

“I can’t imagine any professional I know committing this egregious a lapse in judgement.”

—A government technology leader tells Fast Company why top Trump officials’ decision to use unclassified messaging app Signal to discuss war plans is so surprising.

The big story

Why one developer won’t quit fighting to connect the US’s grids

September 2024

Michael Skelly hasn’t learned to take no for an answer. For much of the last 15 years, the energy entrepreneur has worked to develop long-haul transmission lines to carry wind power across the Great Plains, Midwest, and Southwest. But so far, he has little to show for the effort.

Skelly has long argued that building such lines and linking together the nation’s grids would accelerate the shift from coal- and natural-gas-fueled power plants to the renewables needed to cut the pollution driving climate change. But his previous business shut down in 2019, after halting two of its projects and selling off interests in three more.

Skelly contends he was early, not wrong, and that the market and policymakers are increasingly coming around to his perspective. After all, the US Department of Energy just blessed his latest company’s proposed line with hundreds of millions in grants. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Severance’s Adam Scott sure has interesting taste in music. 
+ While we’re not 100% sure if Millie is definitely the world’s oldest cat, one thing we know for sure is that she lives a life of luxury.
+ Hiking trails are covered in beautiful wildflowers right now; just make sure you tread carefully.
+ This is a really charming look at how girls live in America right now.

Read more

Glaciers generally move so slowly you can’t see their progress with the naked eye. (Their pace is … glacial.) But these massive bodies of ice do march downhill, with potentially planet-altering consequences.  

There’s a lot we don’t understand about how glaciers move and how soon some of the most significant ones could collapse into the sea. That could be a problem, since melting glaciers could lead to multiple feet of sea-level rise this century, potentially displacing millions of people who live and work along the coasts.

A new group is aiming not only to further our understanding of glaciers but also to look into options to save them if things move toward a worst-case scenario, as my colleague James Temple outlined in his latest story. One idea: refreezing glaciers in place.

The whole thing can sound like science fiction. But once you consider how huge the stakes are, I think it gets easier to understand why some scientists say we should at least be exploring these radical interventions.

It’s hard to feel very optimistic about glaciers these days. (The Thwaites Glacier in West Antarctica is often called the “doomsday glacier”—not alarming at all!)

Take two studies published just in the last month, for example. The British Antarctic Survey released the most detailed map to date of Antarctica’s bedrock—the foundation under the continent’s ice. With twice as many data points as before, the study revealed that more ice than we thought is resting on bedrock that’s already below sea level. That means seawater can flow in and help melt ice faster, so Antarctica’s ice is more vulnerable than previously estimated.

Another study examined subglacial rivers—streams that flow under the ice, often from subglacial lakes. The team found that the fastest-moving glaciers have a whole lot of water moving around underneath them, which speeds melting and lubricates the ice sheet so it slides faster, in turn melting even more ice.

And those are just two of the most recent surveys. Look at any news site and it’s probably delivered the same gnarly message at some point recently: The glaciers are melting faster than previously realized. (Our site has one, too: “Greenland’s ice sheet is less stable than we thought,” from 2016.) 

The new group is joining the race to better understand glaciers. Arête Glacier Initiative, a nonprofit research organization founded by scientists at MIT and Dartmouth, has already awarded its first grants to researchers looking into how glaciers melt and plans to study the possibility of reversing those fortunes, as James exclusively reported last week.

Brent Minchew, one of the group’s cofounders and an associate professor of geophysics at MIT, was drawn to studying glaciers because of their potential impact on sea-level rise. “But over the years, I became less content with simply telling a more dramatic story about how things were going—and more open to asking the question of what can we do about it,” he says.

Minchew is among the researchers looking into potential plans to alter the future of glaciers. Strategies being proposed by groups around the world include building physical supports to prop them up and installing massive curtains to slow the flow of warm water that speeds melting. Another approach, which will be the focus of Arête, is called basal intervention. It basically involves drilling holes in glaciers, which would allow water flowing underneath the ice to be pumped out and refrozen, hopefully slowing them down.

If you have questions about how all this would work, you’re not alone. These are almost inconceivably huge engineering projects, they’d be expensive, and they’d face legal and ethical questions. Nobody really owns Antarctica, and it’s governed by a huge treaty—how could we possibly decide whether to move forward with these projects?

Then there’s the question of the potential side effects. Just look at recent news from the Arctic Ice Project, which was researching how to slow the melting of sea ice by covering it with substances designed to reflect sunlight away. (Sea ice is different from glaciers, but some of the key issues are the same.) 

One of the project’s largest field experiments involved spreading tiny silica beads, sort of like sand, over 45,000 square feet of ice in Alaska. But after new research revealed that the materials might be disrupting food chains, the organization announced that it’s concluding its research and winding down operations.

Cutting our emissions of greenhouse gases to stop climate change at the source would certainly be more straightforward than spreading beads on ice, or trying to stop a 74,000-square-mile glacier in its tracks. 

But we’re not doing so hot on cutting emissions—in fact, levels of carbon dioxide in the atmosphere rose faster than ever in 2024. And even if the world stopped polluting the atmosphere with planet-warming gases today, things may have already gone too far to save some of the most vulnerable glaciers. 

The longer I cover climate change and face the situation we’re in, the more I understand the impulse to at least consider every option out there, even if it sounds like science fiction. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
