The first clinical trial of a therapy bot that uses generative AI suggests it was as effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn’t give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area.
A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in the New England Journal of Medicine. Many tech companies have built AI tools for therapy, promising that people can talk with a bot more frequently and cheaply than they can with a trained therapist—and that this approach is safe and effective.
Many psychologists and psychiatrists have shared the vision, noting that fewer than half of people with a mental disorder receive therapy, and those who do might get only 45 minutes per week. Researchers have tried to build tech so that more people can access therapy, but they have been held back by two things.
First, a therapy bot that says the wrong thing could result in real harm. That’s why many researchers have built bots using explicit programming: The software pulls from a finite bank of approved responses (as was the case with Eliza, a mock-psychotherapist computer program built in the 1960s). But this makes them less engaging to chat with, and people lose interest. Second, the hallmarks of good therapeutic relationships—shared goals and collaboration—are hard to replicate in software.
In 2019, as early large language models like OpenAI’s GPT were taking shape, the researchers at Dartmouth thought generative AI might help overcome these hurdles. They set about building an AI model trained to give evidence-based responses. They first tried building it from general mental-health conversations pulled from internet forums. Then they turned to thousands of hours of transcripts of real sessions with psychotherapists.
“We got a lot of ‘hmm-hmms,’ ‘go ons,’ and then ‘Your problems stem from your relationship with your mother,’” said Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study, in an interview. “Really tropes of what psychotherapy would be, rather than actually what we’d want.”
Dissatisfied, they set to work assembling their own custom data sets based on evidence-based practices, which is what ultimately went into the model. Many AI therapy bots on the market, in contrast, might be just slight variations of foundation models like Meta’s Llama, trained mostly on internet conversations. That poses a problem, especially for topics like disordered eating.
“If you were to say that you want to lose weight,” Heinz says, “they will readily support you in doing that, even if you will often have a low weight to start with.” A human therapist wouldn’t do that.
To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day.
Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that’s not perfect but remains one of the best tools researchers have.
These results, Heinz says, are about what one finds in randomized controlled trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. “I’ve been working in digital therapeutics for a long time, and I’ve never seen levels of engagement that are prolonged and sustained at this level,” he says.
Jean-Christophe Bélisle-Pipon, an assistant professor of health ethics at Simon Fraser University who has written about AI therapy bots but was not involved in the research, says the results are impressive but notes that just like any other clinical trial, this one doesn’t necessarily represent how the treatment would act in the real world.
“We remain far from a ‘greenlight’ for widespread clinical deployment,” he wrote in an email.
One issue is the supervision that wider deployment might require. During the beginning of the trial, Heinz says, he personally oversaw all the messages coming in from participants (who consented to the arrangement) to watch out for problematic responses from the bot. If therapy bots needed this oversight, they wouldn’t be able to reach as many people.
I asked Heinz if he thinks the results validate the burgeoning industry of AI therapy sites.
“Quite the opposite,” he says, cautioning that most don’t appear to train their models on evidence-based practices like cognitive behavioral therapy, and they likely don’t employ a team of trained researchers to monitor interactions. “I have a lot of concerns about the industry and how fast we’re moving without really kind of evaluating this,” he adds.
When AI sites advertise themselves as offering therapy in a legitimate, clinical context, Heinz says, it means they fall under the regulatory purview of the Food and Drug Administration. Thus far, the FDA has not gone after many of the sites. If it did, Heinz says, “my suspicion is almost none of them—probably none of them—that are operating in this space would have the ability to actually get a claim clearance”—that is, a ruling backing up their claims about the benefits provided.
Bélisle-Pipon points out that if these types of digital therapies are not approved and integrated into health-care and insurance systems, it will severely limit their reach. Instead, the people who would benefit from using them might seek emotional bonds and therapy from types of AI not designed for those purposes (indeed, new research from OpenAI suggests that interactions with its AI models have a very real impact on emotional well-being).
“It is highly likely that many individuals will continue to rely on more affordable, nontherapeutic chatbots—such as ChatGPT or Character.AI—for everyday needs, ranging from generating recipe ideas to managing their mental health,” he wrote.
Stop me if you’ve heard this one before: A tech company accumulates a ton of user data, hoping to figure out a business model later. That business model never arrives, the company goes under, and the data is in the wind.
The latest version of that story emerged on March 24, when the onetime genetic testing darling 23andMe filed for bankruptcy. Now the fate of 15 million people’s genetic data rests in the hands of a bankruptcy judge. At a hearing on March 26, the judge gave 23andMe permission to seek offers for its users’ data. But there’s still a small chance of writing a better ending for users.
After the bankruptcy filing, the immediate take from policymakers and privacy advocates was that 23andMe users should delete their accounts to prevent genetic data from falling into the wrong hands. That’s good advice for the individual user (and you can read how to do so here). But the reality is most people won’t do it. Maybe they won’t see the recommendations to do so. Maybe they don’t know why they should be worried. Maybe they have long since abandoned an account that they don’t even remember exists. Or maybe they’re just occupied with the chaos of everyday life.
This means the real value of this data comes from the fact that people have forgotten about it. Given 23andMe’s meager revenue—fewer than 4% of people who took tests pay for subscriptions—it seems inevitable that the new owner, whoever it is, will have to find some new way to monetize that data.
This is a terrible deal for users who just wanted to learn a little more about themselves or their ancestry, because genetic data is forever. You can always change your password, your email, your phone number, or even your address. But a bad actor who has your genetic data—whether a cybercriminal selling it to the highest bidder, a company building a profile of your future health risk, or a government trying to identify you—will have it tomorrow and the next day and all the days after that.
Users with exposed genetic data are not only vulnerable to harm today; they’re vulnerable to exploits that might be developed in the future.
While 23andMe promises that it will not voluntarily share data with insurance providers, employers, or public databases, its new owner could unwind those promises at any time with a simple change in terms.
In other words: If a bankruptcy court makes a mistake authorizing the sale of 23andMe’s user data, that mistake is likely permanent and irreparable.
All this is possible because American lawmakers have neglected to meaningfully engage with digital privacy for nearly a quarter-century. As a result, services are incentivized to make flimsy, deceptive promises that can be abandoned at a moment’s notice. And the burden falls on users to keep track of it all, or just give up.
Here, a simple fix would be to reverse that burden. A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted.
Bankruptcy proceedings involving personal data don’t have to end badly. In 2000, the Federal Trade Commission settled with the bankrupt retailer ToySmart to ensure that its customer data could not be sold as a stand-alone asset, and that customers would have to affirmatively consent to unexpected new uses of their data. And in 2015, the FTC intervened in the bankruptcy of RadioShack to ensure that it would keep its promises never to sell the personal data of its customers. (RadioShack eventually agreed to destroy it.)
The ToySmart case also gave rise to the role of the consumer privacy ombudsman. Bankruptcy judges can appoint an ombuds to help the court consider how the sale of personal data might affect the bankruptcy estate, examining the potential harms or benefits to consumers and any alternatives that might mitigate those harms. The U.S. Trustee has requested the appointment of an ombuds in this case. While scholars have called for the role to have more teeth and for the FTC and states to intervene more often, a framework for protecting personal data in bankruptcy is available. And ultimately, the bankruptcy judge has broad power to make decisions about how (or whether) property in bankruptcy is sold.
Here, 23andMe has a more permissive privacy policy than ToySmart or RadioShack. But the risks incurred if genetic data falls into the wrong hands or is misused are severe and irreversible. And given 23andMe’s failure to build a viable business model from testing kits, it seems likely that a new business would use genetic data in ways that users wouldn’t expect or want.
An opt-in requirement for genetic data solves this problem. Genetic data (and other sensitive data) could be held by the bankruptcy trustee and released as individual users gave their consent. If users failed to opt in after a period of time, the remaining data would be deleted. This would incentivize 23andMe’s new owners to earn user trust and build a business that delivers value to users, instead of finding unexpected ways to exploit their data. And it would impose virtually no burden on the people whose genetic data is at risk: after all, they have plenty more DNA to spare.
Consider the alternative. Before 23andMe went into bankruptcy, its then-CEO made two failed attempts to buy it, at reported valuations of $74.7 million and $12.1 million. Using the higher offer, and with 15 million users, that works out to a little under $5 per user. Is it really worth it to permanently risk a person’s genetic privacy just to add a few dollars in value to the bankruptcy estate?
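The back-of-the-envelope figure above can be checked directly (using the reported $74.7 million offer and 15 million users from the paragraph):

```python
higher_offer = 74_700_000   # reported valuation of the CEO's higher buyout attempt, USD
users = 15_000_000          # approximate number of 23andMe users

per_user = higher_offer / users
print(f"${per_user:.2f} per user")  # → $4.98 per user
```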
Of course, this raises a bigger question: Why should anyone be able to buy the genetic data of millions of Americans in a bankruptcy proceeding? The answer is simple: Lawmakers allow them to. Federal and state inaction allows companies to dissolve promises about protecting Americans’ most sensitive data at a moment’s notice. When 23andMe was founded, in 2006, the promise was that personalized health care was around the corner. Today, 18 years later, that era may really be almost here. But with privacy laws like ours, who would trust it?
Keith Porcaro is the Rueben Everett Senior Lecturing Fellow at Duke Law School.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Anthropic can now track the bizarre inner workings of a large language model
The news: The AI firm Anthropic has developed a way to peer inside a large language model and watch what it does as it comes up with a response, revealing key new insights into how the technology works. The takeaway: LLMs are even stranger than we thought.
Why it matters: It’s no secret that large language models work in mysterious ways. Shedding some light on how they work would expose their weaknesses, revealing why they make stuff up and can be tricked into going off the rails. It would help resolve deep disputes about exactly what these models can and can’t do. And it would show how trustworthy (or not) they really are. Read the full story.
—Will Douglas Heaven
What is Signal? The messaging app, explained.
With the recent news that the Atlantic’s editor in chief was accidentally added to a group Signal chat for American leaders planning a bombing in Yemen, many people are wondering: What is Signal? Is it secure? If government officials aren’t supposed to use it for military planning, does that mean I shouldn’t use it either?
The answer is: Yes, you should use Signal. But government officials having top-secret conversations shouldn’t. Read the full story to find out why.
—Jack Cushman
This story is part of our MIT Technology Review Explains series, in which our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more of them here.
“Spare” living human bodies might provide us with organs for transplantation
—Jessica Hamzelou
This week, MIT Technology Review published a piece on bodyoids—living bodies that cannot think or feel pain. In the piece, a trio of scientists argue that advances in biotechnology will soon allow us to create “spare” human bodies that could be used for research, or to provide organs for donation.
If you find your skin crawling at this point, you’re not the only one. It’s a creepy idea, straight from the more horrible corners of science fiction. But bodyoids could be used for good. And if they are truly unaware and unable to think, the use of bodyoids wouldn’t cross “most people’s ethical lines,” the authors argue.
I’m not so sure. Read the full story.
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 A judge has ordered Trump’s officials to preserve their secret Signal chat
While officials are required by law to keep chats detailing government business, Signal’s messages can be set to auto-disappear. (USA Today)
+ The conversation detailed an imminent attack against Houthi rebels in Yemen. (The Hill)
+ A government accountability group has sued the agencies involved. (Reuters)
+ The officials involved in the chat appear to have public Venmo accounts. (Wired $)
2 The White House is prepared to cut up to 50% of agency staff
But the final cuts could end up exceeding even that. (WP $)
+ The sweeping cuts could threaten vital US statistics, too. (FT $)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)
3 OpenAI is struggling to keep up with demand for ChatGPT’s image generation
The fervor around its Studio Ghibli pictures has sent its GPUs into overdrive. (The Verge)
+ Ghibli’s founder is no fan of AI art. (404 Media)
+ Four ways to protect your art from AI. (MIT Technology Review)
4 Facebook is pivoting back towards friends and family
Less news, fewer posts from people you don’t know. (NYT $)
+ A new tab shows purely updates from friends, with no other recommendations. (Insider $)
5 Africa is set to build its first AI factory
A specialized powerhouse for AI computing, to be precise. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)
6 A TikTok network spread Spanish-language immigration misinformation
Including clips of the doctored voices of well-known journalists. (NBC News)
7 Your TV is desperate for your data
Streamers are scrambling around for new ways to make money off the information they gather on you. (Vox)
8 This startup extracts rare earth oxides from industrial magnets
It’s a less intrusive way of accessing minerals vital to EV and wind turbine production. (FT $)
+ The race to produce rare earth elements. (MIT Technology Review)
9 NASA hopes to launch its next Starliner flight as soon as later this year
After its latest mission stretched from a projected eight days to nine months. (Reuters)
+ Europe is finally getting serious about commercial rockets. (MIT Technology Review)
10 The Sims has been the world’s favorite life simulation game for 25 years
But a new Korean game is both more realistic and multicultural. (Bloomberg $)
Quote of the day
“It’s like, can you tell the difference between a person and a person-shaped sock puppet that is holding up a sign saying, ‘I am a sock puppet’?”
—Laura Edelson, a computer science professor at Northeastern University, tells the Wall Street Journal she is skeptical of brands’ ability to ensure their ads are shown to real humans and not bots.
The big story
The race to fix space-weather forecasting before the next big solar storm hits

April 2024
As the number of satellites in space grows, and as we rely on them for increasing numbers of vital tasks on Earth, the need to better predict stormy space weather is becoming more and more urgent.
Scientists have long known that solar activity can change the density of the upper atmosphere. But it’s incredibly difficult to precisely predict the sorts of density changes that a given amount of solar activity would produce.
Now, experts are working on a model of the upper atmosphere to help scientists improve their predictions of how solar activity affects the environment in low Earth orbit. If they succeed, they’ll be able to keep satellites safe even amid turbulent space weather, reducing the risk of potentially catastrophic orbital collisions. Read the full story.
—Tereza Pultarova
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ This is very cool—a nearly-infinite virtual museum entirely generated from Wikipedia.
+ How to let go of that grudge you’ve been harboring (you know the one).
+ If your social media feeds have been plagued by hot men making bad art, you’re not alone.
+ It’s Friday, so enjoy this 1992 recording of a very fresh-faced Pearl Jam.
This week, MIT Technology Review published a piece on bodyoids—living bodies that cannot think or feel pain. In the piece, a trio of scientists argue that advances in biotechnology will soon allow us to create “spare” human bodies that could be used for research, or to provide organs for donation.
If you find your skin crawling at this point, you’re not the only one. It’s a creepy idea, straight from the more horrible corners of science fiction. But bodyoids could be used for good. And if they are truly unaware and unable to think, the use of bodyoids wouldn’t cross “most people’s ethical lines,” the authors argue. I’m not so sure.
Either way, there’s no doubt that developments in science and biotechnology are bringing us closer to the potential reality of bodyoids. And the idea is already stirring plenty of ethical debate and controversy.
One of the main arguments made for bodyoids is that they could provide spare human organs. There’s a huge shortage of organs for transplantation. More than 100,000 people in the US are waiting for a transplant, and 17 people on that waiting list die every day. Human bodyoids could serve as a new source.
Scientists are working on other potential solutions to this problem. One approach is the use of gene-edited animal organs. Animal organs don’t typically last inside human bodies—our immune systems will reject them as “foreign.” But a few companies are creating pigs with a series of gene edits that make their organs more acceptable to human bodies.
A handful of living people have received gene-edited pig organs. David Bennett Sr. was the first person to get a gene-edited pig heart, in 2022, and Richard Slayman was the first to get a kidney, in early 2024. Unfortunately, both men died around two months after their surgery.
But Towana Looney, the third living person to receive a gene-edited pig kidney, has been doing well. She had her transplant surgery in late November of last year. “I am full of energy. I got an appetite I’ve never had in eight years,” she said at the time. “I can put my hand on this kidney and feel it buzzing.” She returned home in February.
At least one company is taking more of a bodyoid-like approach. Renewal Bio, a biotech company based in Israel, hopes to grow “embryo-stage versions of people” for replacement organs.
Their approach is based on advances in the development of “synthetic embryos.” (I’m putting that term in quotation marks because, while it’s the simplest descriptor of what they are, a lot of scientists hate the term.)
Embryos start with the union of an egg cell and a sperm cell. But scientists have been working on ways to make embryos using stem cells instead. Under the right conditions, these cells can divide into structures that look a lot like a typical embryo.
Scientists don’t know how far these embryo-like structures will be able to develop. But they’re already using them to try to get cows and monkeys pregnant.
And no one really knows how to think about synthetic human embryos. Scientists don’t even really know what to call them. Rules stipulate that typical human embryos may be grown in the lab for a maximum of 14 days. Should the same rules apply to synthetic ones?
The very existence of synthetic embryos is throwing into question our understanding of what a human embryo even is. “Is it the thing that is only generated from the fusion of a sperm and an egg?” Naomi Moris, a developmental biologist at the Crick Institute in London, said to me a couple of years ago. “Is it something to do with the cell types it possesses, or the [shape] of the structure?”
The authors of the new MIT Technology Review piece also point out that bodyoids could help speed scientific and medical research.
At the moment, most drug research must be conducted in lab animals before clinical trials can start. But nonhuman animals may not respond the same way people do, and the vast majority of treatments that look super-promising in mice fail in humans. Such research can feel like a waste of both animal lives and time.
Scientists have been working on solutions to these problems, too. Some are creating “organs on chips”—miniature collections of cells organized on a small piece of polymer that may resemble full-size organs and can be used to test the effects of drugs.
Others are creating digital representations of human organs for the same purpose. Such digital twins can be extensively modeled, and can potentially be used to run clinical trials in silico.
Both of these approaches seem somehow more palatable to me, personally, than running experiments on a human created without the capacity to think or feel pain. The idea reminds me of the recent novel Tender Is the Flesh by Agustina Bazterrica, in which humans are bred for consumption. In the book, their vocal cords are removed so that others do not have to hear them scream.
When it comes to real-world biotechnology, though, our feelings about what is “acceptable” tend to shift. In vitro fertilization was demonized when it was first developed, for instance, with opponents arguing that it was “unnatural,” a “perilous insult,” and “the biggest threat since the atom bomb.” It is estimated that more than 12 million people have been born through IVF since Louise Brown became the first “test tube baby” 46 years ago. I wonder how we’ll all feel about bodyoids 46 years from now.
France’s state-owned bank says it will spend 25 million euros ($27 million) buying cryptocurrencies that support local crypto and blockchain projects.
Bpifrance said in a March 27 press release that it would back newly formed projects “with a strong French footprint” where it will receive tokens in return for its investment and will look to fund decentralized finance (DeFi), staking, tokenization and artificial intelligence.
It added that the plan, supported by the French Ministry of Economy and Finance, was to “promote emerging technologies and strengthen the French blockchain ecosystem.”
The global blockchain ecosystem is “currently booming” but the number of French funds taking part is still very limited, it said.
French digital and AI minister Clara Chappaz said public and private financing was “one of the keys to the sustainable positioning of our ecosystem on the international stage.”
Bpifrance deputy CEO Arnaud Caudoux said the bank was convinced of the growing importance that blockchain companies “will take on in the years to come” and that it wants “to increase French competitiveness and presence in the digital assets field.”
“The US is really accelerating its own crypto strategy, so this is all the more important,” Caudoux said at a press conference, as reported by Reuters. He added that Bpifrance had started to support crypto before the US started its own pro-crypto moves.
Bpifrance’s headquarters in Paris. Source: Google
The bank said it had backed the blockchain sector for a decade and had invested over 150 million euros ($162 million), notably helping to finance the crypto hardware wallet company Ledger in 2014.
Bpifrance said it began testing limited investments through tokens in 2022, including a deal with the DeFi lending platform Morpho to buy its token — which has grown to be the 12th largest protocol by value at $3.24 billion, according to DefiLlama.
Venture capitalists often take part in investments paid in tokens. PitchBook expects crypto VC deals to top $18 billion this year, a marked increase from the $13.6 billion raised in 2024.
Typically, a crypto platform that launches a token will allocate a portion of its supply to financiers subject to varying lockup periods where the tokens can’t be sold.
A portion of the token supply is usually immediately given to select public users in order to drum up liquidity, which can cause token values to slide if they cash out.
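The allocation-and-lockup mechanics described above can be sketched with hypothetical numbers; the allocation split and the 24-month linear vesting schedule below are illustrative assumptions, not taken from any real token:

```python
TOTAL_SUPPLY = 1_000_000_000   # hypothetical total token supply

# Hypothetical allocation: one slice to financiers (locked and vesting),
# one slice released to public users immediately to seed liquidity.
INVESTOR_SHARE = 0.20          # locked, vesting linearly over 24 months
PUBLIC_SHARE = 0.10            # liquid at launch

def circulating(month: int) -> int:
    """Tokens free to trade `month` months after launch."""
    public = TOTAL_SUPPLY * PUBLIC_SHARE
    vested = TOTAL_SUPPLY * INVESTOR_SHARE * min(month / 24, 1.0)
    return int(public + vested)

print(circulating(0))    # 100,000,000 — only the public allocation is liquid
print(circulating(12))   # 200,000,000 — half the investor slice has vested
```

The steady growth of the tradable supply is why token prices can slide as lockups expire: each vesting milestone adds tokens that early holders may cash out.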
ChatGPT creator OpenAI has introduced rate limits after a viral social media trend saw nearly everything “Ghiblified” — turned into AI art in the style of the famous Japanese animation studio.
OpenAI CEO Sam Altman was one of the first to take part in the trend, posting a portrait of himself generated by the model on March 25. But in a subsequent post two days later, he said the flood of image requests had started to tax the firm’s infrastructure.
“It’s super fun seeing people love images in ChatGPT but our GPUs are melting. We are going to temporarily introduce some rate limits while we work on making it more efficient,” he said.
Source: Sam Altman
“Also, we are refusing some generations that should be allowed; we are fixing these as fast we can,” he added.
OpenAI launched the upgraded image generation offering in ChatGPT-4o on March 25, resulting in users splashing images across social media in the art style of Studio Ghibli — known for its anime films Spirited Away and My Neighbor Totoro.
Altman didn’t give a definitive timeline on how long the rate limits would last but said, “Hopefully, it won’t be long! ChatGPT free tier will get three generations per day soon.”
Rate limits are generally applied to help manage the aggregate load on OpenAI’s infrastructure, the company says.
“If requests to the API increase dramatically, it could tax the servers and cause performance issues. By setting rate limits, OpenAI can help maintain a smooth and consistent experience for all users,” OpenAI says on its rate limit explanation page.
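The idea behind such limits can be illustrated with a minimal token-bucket sketch (a common rate-limiting technique; the numbers and class below are illustrative, not OpenAI's actual implementation):

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: allow `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # spend one token per served request
            return True
        return False                  # over the limit: reject (or queue) the request

bucket = TokenBucket(rate=2, capacity=5)          # 2 requests/s, bursts of 5
results = [bucket.allow() for _ in range(7)]      # 7 back-to-back requests
# The first 5 are served from the burst allowance; the rest are rejected
# until the bucket refills.
```

Under a scheme like this, a sudden spike in requests is smoothed out rather than hitting the servers all at once, which is the "smooth and consistent experience" the quote describes.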
Along with the legions of others getting in on the trend, X and Tesla CEO Elon Musk shared an image mimicking King Mufasa from Disney’s The Lion King holding up a Shiba Inu.
White House AI and crypto czar David Sacks also joined in, using the Studio Ghibli-art style on an image of himself at an event.
Source: David Sacks
Meanwhile, Bloomberg reported on March 26 that OpenAI expects to more than triple its revenue this year to $12.7 billion, citing a person familiar with the matter.
Altman said on Feb. 12 his firm wants to ship GPT-4.5 and GPT-5 in the coming weeks or months.
The US Securities and Exchange Commission has officially closed its investigation into Crypto.com, with no action taken against the crypto exchange, according to the firm’s CEO, Kris Marszalek.
“They used every tool available to attempt to stifle us, restricting access to banking, auditors, investors, and beyond. It was a calculated attempt to put an end to the industry,” Marszalek said in a March 27 X post.
“The fact that we not only persevered but became stronger is a testament to our vision and the community supporting it. Onwards!”
It comes seven months after the SEC issued a Wells notice to the crypto platform in August, signaling its intention to take legal action against the firm.
“We are pleased that the current SEC leadership has made the decision to close its investigation into Crypto.com,” added Crypto.com’s chief legal officer Nick Lundgren in a March 27 statement, which accused the previous administration of abusing its authority to harm the crypto industry.
Source: Kris Marszalek
Crypto.com had filed a lawsuit against the SEC in October, two months after the Wells notice, accusing the Gary Gensler-led commission of overstepping its authority and taking a “misguided” approach to crypto regulation.
SEC continues to roll back previous enforcement actions
Crypto.com’s announcement follows a wave of other crypto investigations and lawsuits dropped by the SEC over the last five weeks, which affected the likes of Coinbase, Consensys, Robinhood, Gemini, Uniswap, OpenSea and more recently, Immutable.
The SEC also dismissed its civil enforcement action against crypto trading firm Cumberland DRW with prejudice on March 27.
The SEC has adopted a far friendlier approach since Mark Uyeda started leading the commission as acting chair on Jan. 20 after the resignation of former chair Gary Gensler. The SEC established a Crypto Task Force led by SEC Commissioner Hester Peirce to support this new approach.
On Jan. 23, it also canceled a controversial rule that asked financial firms holding crypto to record those holdings as liabilities on their balance sheets.
Trump’s SEC chair nominee, Paul Atkins, is inching closer to becoming the SEC’s new leader after initially being held back by financial disclosures.
Meanwhile, Crypto.com partnered with Trump Media on March 24 to launch a series of “Made in America”-themed exchange-traded funds later this year.
Crypto.com will provide the infrastructure and custody services to supply the crypto tokens for the ETFs, which may include a basket of tokens, including Bitcoin (BTC), Ether (ETH), Solana (SOL), XRP (XRP) and Cronos (CRO).
The US Justice Department (DOJ) seized more than $200,000 in cryptocurrency intended to benefit the militant group Hamas, it said in a statement on March 27.
The cryptocurrency with a total value of $201,400 was traced to fundraising addresses allegedly controlled by Hamas and used to launder more than $1.5 million in digital assets since October 2024.
The laundering occurred through a series of “virtual currency exchanges and transactions by leveraging suspected financiers and over-the-counter brokers,” the DOJ said. The funds are currently held in a combination of at least 17 wallets.
Affidavit to seize the Hamas-linked cryptocurrency. Source: US DOJ
In January 2024, the US Treasury’s Office of Foreign Assets Control, along with corresponding organizations in the United Kingdom and Australia, announced sanctions against networks and facilitators of crypto transactions linked to Hamas. Those sanctions were built on US Treasury sanctions from October 2023.
In January 2024, three families of victims of the Hamas attack against Israel sued Binance and its former CEO Changpeng Zhao, alleging that the exchange had provided “substantial assistance” to terrorists. In oral arguments, a lawyer representing Binance claimed the exchange had “no special relationship [with] Hamas […].”
Binance has faced scrutiny from the US government over alleged shortcomings in its Anti-Money Laundering controls. The exchange settled with the DOJ for $4.3 billion in November 2023.
More regulation needed?
According to a December 2024 report by the Congressional Research Service, Hamas has allegedly sought cryptocurrency donations since at least 2019, although the “scale and effectiveness” of these efforts have been unclear.
Terrorist organizations using crypto for fundraising have increasingly drawn the attention of the US, with some officials questioning whether the industry needed more supervision or regulation to stop such behavior.
According to a 2023 Chainalysis report, terrorism financing accounts for a very small share of crypto usage, with illicit groups sticking to traditional, fiat-based methods to fund their operations.
Magazine: Terrorism and the Israel-Gaza war have been weaponized to destroy crypto