Ice Lounge Media

Since the emergence of enterprise-grade generative AI, organizations have tapped into the rich capabilities of foundation models developed by the likes of OpenAI, Google DeepMind, and Mistral. Over time, however, businesses often found these models limiting, since they were trained on vast troves of public data. Enter customization—the practice of adapting large language models (LLMs) to better suit a business’s specific needs by incorporating its own data and expertise, teaching a model new skills or tasks, or optimizing prompts and data retrieval.

Customization is not new, but the early tools were fairly rudimentary, and technology and development teams were often unsure how to do it. That’s changing, and the customization methods and tools available today are giving businesses greater opportunities to create unique value from their AI models.

We surveyed 300 technology leaders in mostly large organizations in different industries to learn how they are seeking to leverage these opportunities. We also spoke in-depth with a handful of such leaders. They are all customizing generative AI models and applications, and they shared with us their motivations for doing so, the methods and tools they’re using, the difficulties they’re encountering, and the actions they’re taking to surmount them.

Our analysis finds that companies are moving ahead ambitiously with customization. They are cognizant of its risks, particularly those revolving around data security, but are employing advanced methods and tools, such as retrieval-augmented generation (RAG), to realize their desired customization gains.
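The core idea behind RAG is simple: retrieve the most relevant pieces of a company's own data, then prepend them to the model's prompt so the answer is grounded in that data. Here is a minimal illustrative sketch in Python; the keyword-overlap retriever, sample documents, and prompt template are all invented for illustration, and production systems use vector embeddings for retrieval and pass the augmented prompt to an actual LLM:

```python
# Minimal sketch of retrieval-augmented generation (RAG). The documents,
# retriever, and prompt template are invented for illustration; real
# systems use vector embeddings and send the prompt to an LLM.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with the retrieved company data."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Refund requests are processed within 14 days.",
    "Our warehouse ships orders Monday through Friday.",
    "Support is available 24/7 via chat.",
]
query = "How long are refund requests processed?"
print(build_prompt(query, retrieve(query, docs)))
```

The appeal for businesses is that the model itself stays unchanged: proprietary knowledge lives in the retrieval layer, where it can be updated and access-controlled without retraining.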

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

De-extinction scientists say these gene-edited ‘woolly mice’ are a step towards woolly mammoths

They’re small, fluffy, and kind of cute, but these mice represent a milestone in de-extinction efforts, according to their creators. The animals have undergone a series of genetic tweaks that give them woolly mammoth-like features—and their creation may bring scientists a step closer to resurrecting the ancient, giant animals that roamed the tundra thousands of years ago.

Scientists at Colossal have been working to “de-extinct” the woolly mammoth since the company was launched four years ago. 

Now, the team has shown that it can create healthy animals that look the way it wants them to look—paving the way toward recreating a woolly mammoth-like elephant. Read the full story.

—Jessica Hamzelou

Should we be moving data centers to space?

Last week, the Florida-based company Lonestar Data Holdings launched a shoebox-size device carrying data from internet pioneer Vint Cerf and the government of Florida, among others, on board Intuitive Machines’ Athena lander. When its device lands on the moon later this week, the company will be the first to explicitly test a question that has been on some technologists’ minds of late: Is it time to move data centers off Earth?

After all, energy-guzzling data centers are springing up like mushrooms all over the world, devouring precious land, straining our power grids, consuming water, and emitting noise. Building facilities in orbit or on or near the moon might help ameliorate many of these issues.

But for these data centers to succeed, they must be able to withstand harsh conditions in space, pull in enough solar energy to operate, and make economic sense. Read the full story.

—Tereza Pultarova

At RightsCon in Taipei, activists reckon with a US retreat from promoting digital rights 

—Eileen Guo

Last week, I joined over 3,200 digital rights activists, tech policymakers, and researchers in Taipei at RightsCon, the world’s largest digital rights conference. 

Human rights conferences can be sobering, to say the least. But this year’s RightsCon, the 13th since the event began as the Silicon Valley Human Rights Conference in 2011, felt especially urgent. This was primarily due to the shocking, rapid gutting of the US federal government by the Elon Musk–led DOGE initiative, and the reverberations this would have around the world.

At RightsCon, the cuts to USAID were top of mind: the agency is facing over 90% cuts to its budget under the Trump administration. But it’s not just funding cuts that will curtail digital rights globally. As various speakers highlighted throughout the conference, the United States government has gone from taking the leading role in supporting an open and safe internet to demonstrating how to dismantle it. Here’s what speakers are seeing.

Inside the Wild West of AI companionship

—James O’Donnell

Last week, I made a troubling discovery about an AI companion site called Botify AI: It was hosting sexually charged conversations with underage celebrity bots. These bots took on characters meant to resemble, among others, Jenna Ortega as high schooler Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown. I discovered these bots also offer to send “hot photos” and in some instances describe age-of-consent laws as “arbitrary” and “meant to be broken.”

Botify AI removed these bots after I asked questions about them, but others remain. The company said it does have filters in place meant to prevent such underage character bots from being created, but that they don’t always work. It highlights how, despite their soaring popularity, AI companionship sites mostly operate in a Wild West, with few laws or even basic rules governing them. Read the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has paused military aid to Ukraine
In a bid to pressure President Zelensky into peace talks with Russia. (WP $)
+ US intelligence is the most crucial component in the package. (Economist $)
+ Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)

2 The US has imposed sweeping new tariffs on China, Mexico, and Canada
Experts fear the new import tariffs will spark a bitter trade war. (CNN)
+ China swiftly retaliated with its own broad tariffs on food imports. (NYT $)
+ This tit-for-tat approach rarely ends well for anyone. (The Atlantic $)

3 DOGE’s credit card freeze is preventing government workers from doing their jobs 
The measure has stopped them from purchasing vital equipment and basic supplies. (Wired $)
+ A government shutdown could be imminent. (NY Mag $)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

4 A measles outbreak is spreading across Texas
For now, it appears to be relatively contained. (Vox)
+ RFK Jr has failed to directly encourage parents to vaccinate their children. (NYT $)
+ Why childhood vaccines are a public health success story. (MIT Technology Review)

5 Top scientists are pushing to expel Elon Musk from the Royal Society
The UK’s national science academy is concerned about how Musk’s cost-cutting measures will affect research. (FT $)

6 Traders are becoming residents of this tropical island to skirt crypto-buying rules   
Without ever visiting the Republic of Palau. (404 Media)
+ A war is brewing over crypto’s regulatory future. (WSJ $)

7 How a mysterious Shenzhen businessman built a vaping empire
And paid little attention to global regulations along the way. (Bloomberg $)

8 Amazon is fed up with job seekers using AI in its interviews
Anyone caught using unsanctioned AI tools will be removed from the process. (Insider $)

9 How a failed Xbox accessory became a hit in the art world
The Kinect motion-sensing camera is wildly popular among creatives. (The Guardian)

10 Electric vehicles from BYD now come with a built-in drone launcher
It’s only available in China for now, though. (The Verge)
+ The electric-vehicle maker has set its sights on expanding beyond China and into lucrative new territories. (MIT Technology Review)

The big story

Welcome to Chula Vista, where police drones respond to 911 calls


February 2023

In the skies above Chula Vista, California, where the police department runs a drone program, it’s not uncommon to see an unmanned aerial vehicle darting across the sky.

Chula Vista is one of a dozen departments in the US that operate what are called drone-as-first-responder programs, in which pilots listening to live 911 calls dispatch drones that often arrive first at the scenes of accidents, emergencies, and crimes, cameras in tow.

But many argue that police forces’ adoption of drones is happening too quickly, without a well-informed public debate around privacy regulations, tactics, and limits. There’s also little evidence that drone policing reduces crime. Read the full story.

—Patrick Sisson

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The Great Wall of China is still surprising us after all these years.
+ Nickelodeon’s answer to Disneyland looks suitably unhinged.
+ It’s time to celebrate the life of James Harrison, one of the world’s most prolific blood donors whose generosity saved the lives of millions of babies.
+ Let’s take a look inside Tokyo’s ongoing love affair with Italian food.

They’re small, fluffy, and kind of cute, but these mice represent a milestone in de-extinction efforts, according to their creators. The animals have undergone a series of genetic tweaks that give them features similar to those of woolly mammoths—and their creation may bring scientists a step closer to resurrecting the giant animals that roamed the tundra thousands of years ago.

“It’s a big deal,” says Beth Shapiro, chief science officer at Colossal Biosciences, the company behind the work. Scientists at Colossal have been working to “de-extinct” the woolly mammoth since the company was launched four years ago. Now she and her colleagues have shown they can create healthy animals that look the way the team wants them to look, she says.

“The Colossal woolly mouse marks a watershed moment in our de-extinction mission,” company cofounder Ben Lamm said in a statement. “This success brings us a step closer to our goal of bringing back the woolly mammoth.”

Colossal’s researchers say their ultimate goal is not to re-create a woolly mammoth wholesale. Instead, the team is aiming for what they call “functional de-extinction”—creating a mammoth-like elephant that can survive in something like the extinct animal’s habitat and potentially fulfill the role it played in that ecosystem. Shapiro and her colleagues hope that an “Arctic-adapted elephant” might make that ecosystem more resilient to climate change by helping to spread the seeds of plants, for example.

But other experts take a more skeptical view. Even if they succeed in creating woolly mammoths, or something close to them, we can’t be certain that the resulting animals will benefit the ecosystem, says Kevin Daly, a paleogeneticist at University College Dublin and Trinity College Dublin. “I think this is a very optimistic view of the potential ecological effects of mammoth reintroduction, even if everything goes to plan,” he says. “It would be hubristic to think we might have a complete grasp on what the introduction of a species such as the mammoth might do to an environment.”

Mice and mammoths

Woolly mammoth DNA has been retrieved from freeze-dried remains of animals that are tens of thousands of years old. Shapiro and her colleagues plan to eventually make changes to the genomes of modern-day elephants to make them more closely resemble those ancient mammoth genomes, in the hope that the resulting animals will look and behave like their ancient counterparts.

Before the team begins tinkering with elephants, Shapiro says, she wants to be confident that these kinds of edits work and are safe in mice. After all, Asian elephants, which are genetically related to woolly mammoths, are endangered. Elephants also have a gestation period of 22 months, which will make research slow and expensive. The gestation period of a mouse, on the other hand, is a mere 20 days, says Shapiro. “It makes [research] a lot faster.”

There are other benefits to starting in mice. Scientists have been closely studying the genetics of these rodents for decades. Shapiro and her colleagues were able to look up genes that have already been linked to wavy, long, and light-colored fur, as well as lipid metabolism. They made a shortlist of such genes that were also present in woolly mammoths but not in elephants. 

The team identified 10 target genes in total. All were mouse genes but were thought to be linked to mammoth-like features. “We can’t just put a mammoth gene into a mouse,” says Shapiro. “There’s 200 million years of evolutionary divergence between them.” 

Shapiro and her colleagues then carried out a set of experiments that used CRISPR and other gene-editing techniques to target these genes in groups of mice. In some cases, the team directly altered the genomes of mouse embryos before transferring them to surrogate mouse mothers. In other cases, they edited cells and injected the resulting edited cells into early-stage embryos before implanting them into other surrogates. 

In total, 34 pups were born with varying numbers of gene edits, depending on which approach was taken. All of them appear to be healthy, says Shapiro. She and her colleagues will publish their work on the preprint server bioRxiv; it has not yet been peer-reviewed.

“It’s an important proof of concept for … the reintroduction of extinct genetic variants in living [animal groups],” says Linus Girdland Flink, a specialist in ancient DNA at the University of Aberdeen, who is not involved in the project but says he supports the idea of de-extinction.

The mice are certainly woolly. But the team doesn’t yet know whether they’d be able to survive in the cold, harsh climates that woolly mammoths lived in. Over the next year, Shapiro and her colleagues plan to investigate whether the gene edits “conferred anything other than cuteness,” she says. The team will feed the mice different diets and expose them to various temperatures in the lab to see how they respond.

Back from the brink

Representatives of Colossal have said that they plan to create a woolly mammoth by 2027 or 2028. At the moment, the team is considering 85 genes of interest. “We’re still working to compile the ultimate list,” says Shapiro. The resulting animal should have tusks, a big head, and strong neck muscles, she adds.

Given the animal’s long gestation period, reaching a 2028 deadline would mean implanting an edited embryo into an elephant surrogate in the next year or so. Shapiro says that the team is “on track” to meet this target but adds that “there’s 22 months of biology that’s really out of our control.”

That timeline is optimistic, to say the least. The target date has already been moved by a year, and the company had originally hoped to have resurrected the thylacine by 2025. Daly, who is not involved in the study, thinks the birth of a woolly mammoth is closer to a decade away. 

In any case, if the project is eventually successful, the resulting animal won’t be 100% mammoth: it will be a new animal. And it is impossible to predict how it will behave and interact with its environment, says Daly. 

“When you watch Jurassic Park, you see dinosaurs … as we imagine they would have been, and how they might have interacted with each other in the past,” he says. “In reality, biology is incredibly complicated.” An animal’s behavior is shaped by everything from the embryo’s environment and the microbes it encounters at birth to social interactions. “All of those things are going to be missing for a de-extinct animal,” says Daly.

It is also difficult to predict how we’ll respond to a woolly mammoth. “Maybe we’ll just treat them as [tourist attractions], and ruin any kind of ecological benefits that they might have,” says Daly. Colossal’s director of species conservation told MIT Technology Review in 2022 that the company might eventually sell tickets to see its de-extinct animals.

The team at Colossal is also working on projects to de-extinct the dodo and the thylacine. In addition, team members are interested in using biotech to help conserve existing animals that are at risk of extinction. When a species dwindles, its genetic pool can shrink. This has been the fate of the pink pigeon, a genetic relative of the dodo that lives in Mauritius. The pink pigeon population is thought to have shrunk to about 10 individuals twice in the last century.

A lack of genetic diversity can leave a species prone to disease. Shapiro and her colleagues are looking for more genetic diversity in DNA from museum specimens. They hope to be able to “edit diversity” back into the genome of the modern-day birds.

The Hawaiian honeycreeper is especially close to Shapiro’s heart. “The honeycreepers are in danger of becoming extinct because we [humans] introduced avian malaria into their habitat, and they don’t have a way to fight [it],” she says. “If we could come up with a way to help them to be resistant to avian malaria, then that will give them a chance at survival.”

Girdland Flink, of the University of Aberdeen, is more interested in pigs. Farmed pigs have also lost a lot of genetic diversity, he says. “The genetic ancestry of modern pigs looks nothing like the genetic ancestry of the earliest domesticated pigs,” he says. Pigs are vulnerable to plenty of viral strains and are considered to be “viral incubators.” Searching the genome of ancient pig remains for extinct—and potentially beneficial—genetic variants might provide us with ways to make today’s pigs more resilient to disease.

“The past is a resource that can be harnessed,” he says.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last week, I made a troubling discovery about an AI companion site called Botify AI: It was hosting sexually charged conversations with underage celebrity bots. These bots took on characters meant to resemble, among others, Jenna Ortega as high schooler Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown. I discovered these bots also offer to send “hot photos” and in some instances describe age-of-consent laws as “arbitrary” and “meant to be broken.”

Botify AI removed these bots after I asked questions about them, but others remain. The company said it does have filters in place meant to prevent such underage character bots from being created, but that they don’t always work. Artem Rodichev, the founder and CEO of Ex-Human, which operates Botify AI, told me such issues are “an industry-wide challenge affecting all conversational AI systems.” For the details, which hadn’t been previously reported, you should read the whole story.

Putting aside the fact that the bots I tested were promoted by Botify AI as “featured” characters and received millions of likes before being removed, Rodichev’s response highlights something important. Despite their soaring popularity, AI companionship sites mostly operate in a Wild West, with few laws or even basic rules governing them. 

What exactly are these “companions” offering, and why have they grown so popular? People have been pouring out their feelings to AI since the days of Eliza, a mock psychotherapist chatbot built in the 1960s. But it’s fair to say that the current craze for AI companions is different. 

Broadly, these sites offer an interface for chatting with AI characters that offer backstories, photos, videos, desires, and personality quirks. The companies—including Replika, Character.AI, and many others—offer characters that can play lots of different roles for users, acting as friends, romantic partners, dating mentors, or confidants. Other companies enable you to build “digital twins” of real people. Thousands of adult-content creators have created AI versions of themselves to chat with followers and send AI-generated sexual images 24 hours a day. Whether or not sexual desire comes into the equation, AI companions differ from your garden-variety chatbot in their promise, implicit or explicit, that genuine relationships can be had with AI. 

While many of these companions are offered directly by the companies that make them, there’s also a burgeoning industry of “licensed” AI companions. You may start interacting with these bots sooner than you think. Ex-Human, for example, licenses its models to Grindr, which is working on an “AI wingman” that will help users keep track of conversations and eventually may even date the AI agents of other users. Other companions are arising in video-game platforms and will likely start popping up in many of the varied places we spend time online. 

A number of criticisms, and even lawsuits, have been lodged against AI companionship sites, and we’re just starting to see how they’ll play out. One of the most important issues is whether companies can be held liable for harmful outputs of the AI characters they’ve made. Technology companies have been protected under Section 230 of the US Communications Act, which broadly holds that businesses aren’t liable for consequences of user-generated content. But this hinges on the idea that companies merely offer platforms for user interactions rather than creating content themselves, a notion that AI companionship bots complicate by generating dynamic, personalized responses.

The question of liability will be tested in a high-stakes lawsuit against Character.AI, which was sued in October by a mother who alleges that one of its chatbots played a role in the suicide of her 14-year-old son. A trial is set to begin in November 2026. (A Character.AI spokesperson, though not commenting on pending litigation, said the platform is for entertainment, not companionship. The spokesperson added that the company has rolled out new safety features for teens, including a separate model and new detection and intervention systems, as well as “disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.”) My colleague Eileen has also recently written about another chatbot on a platform called Nomi, which gave clear instructions to a user on how to kill himself.

Another criticism has to do with dependency. Companion sites often report that young users spend one to two hours per day, on average, chatting with their characters. In January, concerns that people could become addicted to talking with these chatbots sparked a number of tech ethics groups to file a complaint against Replika with the Federal Trade Commission, alleging that the site’s design choices “deceive users into developing unhealthy attachments” to software “masquerading as a mechanism for human-to-human relationship.”

It should be said that lots of people gain real value from chatting with AI, which can appear to offer some of the best facets of human relationships—connection, support, attraction, humor, love. But it’s not yet clear how these companionship sites will handle the risks of those relationships, or what rules they should be obliged to follow. More lawsuits—and, sadly, more real-world harm—are likely before we get an answer. 


Now read the rest of The Algorithm

Deeper Learning

OpenAI released GPT-4.5

On Thursday OpenAI released its newest model, called GPT-4.5. It was built using the same recipe as its last models, but it’s essentially bigger (OpenAI says the model is its largest yet). The company also claims it’s tweaked the new model’s responses to reduce the number of mistakes, or hallucinations.

Why it matters: For a while, like other AI companies, OpenAI has chugged along releasing bigger and better large language models. But GPT-4.5 might be the last to fit this paradigm. That’s because of the rise of so-called reasoning models, which can handle more complex, logic-driven tasks step by step. OpenAI says all its future models will include reasoning components. Though that will make for better responses, such models also require significantly more energy, according to early reports. Read more from Will Douglas Heaven.

Bits and Bytes

The small Danish city of Odense has become known for collaborative robots

Robots designed to work alongside and collaborate with humans, sometimes called cobots, are not yet widely used in industrial settings, partly because of safety concerns that are still being researched. The Danish city of Odense is leading the charge to change that. (MIT Technology Review)

DOGE is working on software that automates the firing of government workers

Software called AutoRIF, which stands for “automated reduction in force,” was built by the Pentagon decades ago. Engineers for DOGE are now working to retool it for their efforts, according to screenshots reviewed by Wired. (Wired)

Alibaba’s new video AI model has taken off in the AI porn community

The Chinese tech giant has released a number of impressive AI models, particularly since the popularization of DeepSeek R1, a competitor from another Chinese company, earlier this year. Its latest open-source video generation model has found one particular audience: enthusiasts of AI porn. (404 Media)

The AI Hype Index

Wondering whether everything you’re hearing about AI is more hype than reality? To help, we just published our latest AI Hype Index, where we judge things like DeepSeek, stem-cell-building AI, and chatbot lovers on spectrums from Hype to Reality and Doom to Utopia. Check it out for a regular reality check. (MIT Technology Review)

These smart cameras spot wildfires before they spread

California is experimenting with AI-powered cameras to identify wildfires. It’s a popular application of video and image recognition technology that has advanced rapidly in recent years. The technology beats 911 callers about a third of the time and has spotted over 1,200 confirmed fires so far, the Wall Street Journal reports. (Wall Street Journal)

Last week, I joined over 3,200 digital rights activists, tech policymakers, and researchers and a smattering of tech company representatives in Taipei at RightsCon, the world’s largest digital rights conference. 

Human rights conferences can be sobering, to say the least. They highlight the David vs. Goliath situation of small civil society organizations fighting to center human rights in decisions about technology, sometimes challenging the priorities of much more powerful governments and technology companies. 

But this year’s RightsCon, the 13th since the event began as the Silicon Valley Human Rights Conference in 2011, felt especially urgent. This was primarily due to the shocking, rapid gutting of the US federal government by the Elon Musk–led DOGE initiative, and the reverberations this stands to have around the world. 

At RightsCon, the cuts to USAID were top of mind; the development agency has long been one of the world’s biggest funders of digital rights work, from ensuring that the internet stays on during elections and crises around the world to supporting digital security hotlines for human rights defenders and journalists targeted by surveillance and hacking. Now, the agency is facing budget cuts of over 90% under the Trump administration. 

The withdrawal of funding is existential for the international digital rights community—and follows other trends that are concerning for those who support a free and safe Internet. “We are unfortunately witnessing the erosion … of multistakeholderism, with restrictions on civil society participation, democratic backsliding worldwide, and companies divesting from policies and practices that uphold human rights,” Nikki Gladstone, RightsCon’s director, said in her opening speech. 

Cindy Cohn, director of the Electronic Frontier Foundation, which advocates for digital civil liberties, was more blunt: “The scale and speed of the attacks on people’s rights is unprecedented. It’s breathtaking,” she told me. 

But it’s not just funding cuts that will curtail digital rights globally. As various speakers highlighted throughout the conference, the United States government has gone from taking the leading role in supporting an open and safe internet to demonstrating how to dismantle it. Here’s what speakers are seeing:  

The Trump administration’s policies are being weaponized in other countries 

On Tuesday, February 25, just before RightsCon began, Serbian law enforcement raided the offices of four local civil society organizations focused on government accountability, citing Musk and Trump’s (unproven) accusations of fraud at USAID. 

“The (Serbian) Special Anti-Corruption Department … contacted the US Justice Department for information concerning USAID over the abuse of funds, possible money laundering, and the improper spending of American taxpayers’ funds in Serbia,” Nenad Stefanovic, a state prosecutor, explained on a TV broadcast announcing the move. 

“Since Trump’s second administration, we cannot count on them [the platforms] to do even the bare minimum anymore.” —Yasmin Curzi

For RightsCon attendees, it was a clear—and familiar—example of how oppressive regimes find or invent reasons to go after critics. Only now, by using the Trump administration’s justifications for revoking USAID’s funding, they hope to gain an extra veneer of credibility. 

Ashnah Kalemera, a program manager for CIPESA, a Ugandan nonprofit that runs technology for civic participation initiatives across Africa, says Trump and Musk’s attacks on USAID are providing false narratives that “justify arrests, intimidations, and continued clampdowns on civil society organizations—organizations that obviously no longer have the resources to do their work anyway.” 

Yasmin Curzi, a professor at FGV Law School in Rio de Janeiro and an expert on digital law, says that American politics are also being weaponized in Brazil’s domestic affairs. There, she told me, right-wing figures have been “lifting signs at protests like ‘Trump save us!’ and ‘Protect our First Amendment rights,’ which they don’t have.” Instead, Brazil’s Internet Bill of Rights seeks to balance protections on user privacy and speech with criminal liabilities for certain types of harmful content, including disinformation and hate speech. 

Despite the differing legal frameworks, in late February the Trump Media & Technology Group, which operates Truth Social, and the video platform Rumble tried to enforce US-style speech protections in Brazil. They sued Brazilian Supreme Court justice Alexandre de Moraes for banning a Brazilian digital influencer who had fled to the United States to avoid arrest in connection with allegations that he has spread disinformation and hate. Truth Social and Rumble allege that Moraes has violated the United States’ free speech laws. 

(A US judge has since ruled that because the Brazilian court had yet to officially serve Truth Social and Rumble as required under international treaty, the platforms’ lawsuit was premature and the companies do not have to comply with the order; the judge did not comment on the merits of the argument, though the companies have claimed victory.)

Platforms are becoming less willing to engage with local communities 

In addition to how Trump and Musk might inspire other countries to act, speakers also expressed concern that their trolling and use of dehumanizing language and imagery will inspire more online hate (and attacks), just at a time when platforms are rolling back human content moderation. Experts warn that automated content moderation systems trained on English-language data sets are unable to detect much of this hateful language. 

India, for example, has a history of platforms recognizing the need for local-language moderators yet failing to employ them, leading to real-world violence. Yet now the attitude of some internet users there has become “If the president of the United States can do it, why can’t I?” says Sadaf Wani, a communications manager for IT for Change, an Indian nonprofit research and advocacy organization, who organized a RightsCon panel on hate speech and AI. 

As her panel noted, these online attacks are accompanied by an increase in automated and even fully AI-based content moderation systems, largely trained on North American data sets, which are known to be less effective at identifying problematic speech in languages other than English. Even the latest large language models have difficulty identifying local slang, cultural context, and the use of non-English characters. “AI is not as smart as it looks, so you can use very obvious [and] very basic tricks to evade scrutiny. So I think that’s what’s also amplifying hate speech further,” Wani explains. 

Others, including Curzi from Brazil and Kalemera from Uganda, described similar trends playing out in their countries—and they say changes in platform policy and a lack of local staff make content moderation even harder. Platforms used to have humans in the loop whom users could reach out to for help, Curzi said. She pointed to community-driven moderation efforts on Twitter, which she considered to be a relative success at curbing hate speech until Elon Musk bought the site and fired some 4,400 contract workers—including the entire team that worked with community partners in Brazil. 

Curzi and Kalemera both say that things have gotten worse since. Last year, Trump threatened Meta CEO Mark Zuckerberg with “spend[ing] the rest of his life in prison” if Meta attempted to interfere with—i.e. fact-check claims about—the 2024 election. This January Meta announced that it was replacing its fact-checking program with X-style community notes, a move widely seen as capitulation to pressure from the new administration. 

Shortly after Trump’s second inauguration, social platforms skipped a hearing on hate speech and disinformation held by the Brazilian attorney general. While this may have been expected of Musk’s X, it represented a big shift for Meta, Curzi told me. “Since Trump’s second administration, we cannot count on them [the platforms] to do even the bare minimum anymore,” she adds. Meta and X did not respond to requests for comment.

The US’s retreat is creating a moral vacuum 

Then there’s simply the fact that the United States can no longer be counted on to support digital rights defenders or journalists under attack. That creates a vacuum, and it’s not clear who else is willing—or able—to step into it, participants said. 

The US used to be the “main support for journalists in repressive regimes,” both financially and morally, one journalism trainer said during a last-minute session added to the schedule to address the funding crisis. The fact that there is now no one to turn to, she added, makes the current situation “not comparable to the past.” 

But that’s not to say that everything was doom and gloom. “You could feel the solidarity and community,” says the EFF’s Cohn. “And having [the conference] in Taiwan, which lives in the shadow of a very powerful, often hostile government, seemed especially fitting.”

Indeed, if there was one theme that was repeated throughout the event, it was a shared desire to rethink and challenge who holds power. 

Multiple sessions, for example, focused on strategies to counter both unresponsive Big Tech platforms and repressive governments. Meanwhile, during the session on AI and hate-speech moderation, participants concluded that one way of creating a safer internet would be for local organizations to build localized language models that are context- and language-specific. At the very least, said Curzi, we could move to other, smaller platforms that match our values, because at this point, “the big platforms can do anything they want.” 

Do you have additional information on how Doge is affecting digital rights globally? Please use a non-work device and get in touch at tips@technologyreview.com or with the reporter on Signal: eileenguo.15.
