OpenAI spent $1.76 million on government lobbying in 2024 and $510,000 in the last three months of the year alone, according to a new disclosure filed on Tuesday—a significant jump from 2023, when the company spent just $260,000 on Capitol Hill. The company also disclosed a new in-house lobbyist, Meghan Dorn, who worked for five years for Senator Lindsey Graham and started at OpenAI in October. The filing also shows activity related to two new pieces of legislation in the final months of the year: the House’s AI Advancement and Reliability Act, which would set up a government center for AI research, and the Senate’s Future of Artificial Intelligence Innovation Act, which would create shared benchmark tests for AI models.
OpenAI did not respond to questions about its lobbying efforts.
But perhaps more important, the disclosure is a clear signal of the company’s arrival as a political player, as its first year of serious lobbying ends and Republican control of Washington begins. While OpenAI’s lobbying spending is still dwarfed by its peers’—Meta tops the list of Big Tech spenders, with more than $24 million in 2024—the uptick comes as it and other AI companies have helped redraw the shape of AI policy.
For the past few years, AI policy has been something like a whack-a-mole response to the risks posed by deepfakes and misinformation. But over the last year, AI companies have started to position the success of the technology as pivotal to national security and American competitiveness, arguing that the government must therefore support the industry’s growth. As a result, OpenAI and others now seem poised to gain access to cheaper energy, lucrative national security contracts, and a more lax regulatory environment that’s unconcerned with the minutiae of AI safety.
While the big players seem more or less aligned on this grand narrative, messy divides on other issues are still threatening to break through the harmony on display at President Trump’s inauguration this week.
AI regulation really began in earnest after ChatGPT launched in November 2022. At that point, “a lot of the conversation was about responsibility,” says Liana Keesing, campaigns manager for technology reform at Issue One, a democracy nonprofit that tracks Big Tech’s influence.
Companies were asked what they’d do about sexually abusive deepfake images and election disinformation. “Sam Altman did a very good job coming in and painting himself early as a supporter of that process,” Keesing says.
OpenAI started its official lobbying effort around October 2023, hiring Chan Park—a onetime Senate Judiciary Committee counsel and Microsoft lobbyist—to lead the effort. Lawmakers, particularly then Senate majority leader Chuck Schumer, were vocal about wanting to curb these particular harms; OpenAI hired Schumer’s former legal counsel, Reginald Babin, as a lobbyist, according to data from OpenSecrets. This past summer, the company hired the veteran political operative Chris Lehane as its head of global policy.
OpenAI’s previous disclosures confirm that the company’s lobbyists subsequently focused much of last year on legislation like the No Fakes Act and the Protect Elections from Deceptive AI Act. The bills did not materialize into law. But as the year went on, the regulatory goals of AI companies began to change. “One of the biggest shifts that we’ve seen,” Keesing says, “is that they’ve really started to focus on energy.”
In September, Altman, along with leaders from Nvidia, Anthropic, and Google, visited the White House and pitched the vision that US competitiveness in AI will depend on subsidized energy infrastructure to train the best models. Altman proposed to the Biden administration the construction of multiple five-gigawatt data centers, which would each consume as much electricity as New York City.
Around the same time, companies like Meta and Microsoft started to say that nuclear energy will provide the path forward for AI, announcing deals aimed at firing up new nuclear power plants.
It seems likely OpenAI’s policy team was already planning for this particular shift. In April, the company hired lobbyist Matthew Rimkunas, who worked for Bill Gates’s sustainable energy effort Breakthrough Energies and, before that, spent 16 years working for Senator Graham; the South Carolina Republican serves on the Senate subcommittee that manages nuclear safety.
This new AI energy race is inseparable from the positioning of AI as essential for national security and US competitiveness with China. OpenAI laid out its position in a blog post in October, writing, “AI is a transformational technology that can be used to strengthen democratic values or to undermine them. That’s why we believe democracies should continue to take the lead in AI development.” Then in December, the company went a step further and reversed its policy against working with the military, announcing it would develop AI models with the defense-tech company Anduril to help take down drones around military bases.
That same month, Sam Altman said during an interview with The Free Press that the Biden administration was “not that effective” in shepherding AI: “The things that I think should have been the administration’s priorities, and I hope will be the next administration’s priorities, are building out massive AI infrastructure in the US, having a supply chain in the US, things like that.”
That characterization glosses over the CHIPS Act, a $52 billion stimulus to the domestic chips industry that is, at least on paper, aligned with Altman’s vision. (It also preceded an executive order Biden issued just last week, to lease federal land to host the type of gigawatt-scale data centers that Altman had been asking for.)
Intentionally or not, Altman’s posture aligned him with the growing camaraderie between President Trump and Silicon Valley. Mark Zuckerberg, Elon Musk, Jeff Bezos, and Sundar Pichai all sat directly behind Trump’s family at the inauguration on Monday, and Altman also attended. Many of them had also made sizable donations to Trump’s inaugural fund, with Altman personally throwing in $1 million.
It’s easy to view the inauguration as evidence that these tech leaders are aligned with each other, and with other players in Trump’s orbit. But there are still some key dividing lines that will be worth watching. Notably, there’s the clash over H-1B visas, which allow many noncitizen AI researchers to work in the US. Musk and Vivek Ramaswamy (who is, as of this week, no longer a part of the so-called Department of Government Efficiency) have been pushing for that visa program to be expanded. This sparked backlash from some allies of the Trump administration, perhaps most loudly Steve Bannon.
Another fault line is the battle between open- and closed-source AI. Google and OpenAI prevent anyone from knowing exactly what’s in their most powerful models, often arguing that this keeps them from being used improperly by bad actors. Musk has sued OpenAI and Microsoft over the issue, alleging that closed-source models are antithetical to OpenAI’s hybrid nonprofit structure. Meta, whose Llama model is open-source, recently sided with Musk in that lawsuit. Venture capitalist and Trump ally Marc Andreessen echoed these criticisms of OpenAI on X just hours after the inauguration. (Andreessen has also said that making AI models open-source “makes overbearing regulations unnecessary.”)
Finally, there are the battles over bias and free speech. The vastly different approaches that social media companies have taken to moderating content—including Meta’s recent announcement that it would end its US fact-checking program—raise questions about whether the way AI models are moderated will continue to splinter too. Musk has lamented what he calls the “wokeness” of many leading models, and Andreessen said on Tuesday that “Chinese LLMs are much less censored than American LLMs” (though that’s not quite true, given that many Chinese AI models have government-mandated censorship in place that forbids particular topics). Altman has been more equivocal: “No two people are ever going to agree that one system is perfectly unbiased,” he told The Free Press.
It’s only the start of a new era in Washington, but the White House has been busy. It has repealed many executive orders signed by President Biden, including the landmark order on AI that imposed rules for government use of the technology (while it appears to have kept Biden’s order on leasing land for more data centers). Altman is busy as well. OpenAI, Oracle, and SoftBank reportedly plan to spend up to $500 billion on a joint venture for new data centers; the project was announced by President Trump, with Altman standing alongside. And according to Axios, Altman will also be part of a closed-door briefing with government officials on January 30, reportedly about OpenAI’s development of a powerful new AI agent.
The United States and China are entangled in what many have dubbed an “AI arms race.”
In the early days of this standoff, US policymakers drove an agenda centered on “winning” the race, mostly from an economic perspective. In recent months, leading AI labs such as OpenAI and Anthropic got involved in pushing the narrative of “beating China” in what appeared to be an attempt to align themselves with the incoming Trump administration. The belief that the US can win in such a race was based mostly on the early advantage it had over China in advanced GPU compute resources and the effectiveness of AI’s scaling laws.
But now it appears that access to large quantities of advanced compute resources is no longer the defining or sustainable advantage many had thought it would be. In fact, the capability gap between leading US and Chinese models has essentially disappeared, and in one important way the Chinese models may now have an advantage: They are able to achieve near equivalent results while using only a small fraction of the compute resources available to the leading Western labs.
The AI competition is increasingly being framed in narrow national security terms, as a zero-sum game, influenced by assumptions that a future war between the US and China, centered on Taiwan, is inevitable. The US has employed “chokepoint” tactics to limit China’s access to key technologies like advanced semiconductors, and China has responded by accelerating its push for self-sufficiency and indigenous innovation, a dynamic that is causing US efforts to backfire.
Recently even outgoing US Secretary of Commerce Gina Raimondo, a staunch advocate for strict export controls, finally admitted that using such controls to hold back China’s progress on AI and advanced semiconductors is a “fool’s errand.” Ironically, the unprecedented export control packages targeting China’s semiconductor and AI sectors have unfolded alongside tentative bilateral and multilateral engagements to establish AI safety standards and governance frameworks—highlighting a paradoxical desire of both sides to compete and cooperate.
When we consider this dynamic more deeply, it becomes clear that the real existential threat ahead is not from China, but from the weaponization of advanced AI by bad actors and rogue groups who seek to create broad harms, gain wealth, or destabilize society. As with nuclear arms, China, as a nation-state, must be careful about using AI-powered capabilities against US interests, but bad actors, including extremist organizations, would be much more likely to abuse AI capabilities with little hesitation. Given the asymmetric nature of AI technology, which is much like cyberweapons, it is very difficult to fully prevent and defend against a determined foe who has mastered its use and intends to deploy it for nefarious ends.
Given the ramifications, it is incumbent on the US and China as global leaders in developing AI technology to jointly identify and mitigate such threats, collaborate on solutions, and cooperate on developing a global framework for regulating the most advanced models—instead of erecting new fences, small or large, around AI technologies and pursuing policies that deflect focus from the real threat.
It is now clearer than ever that despite the high stakes and escalating rhetoric, there will not and cannot be any long-term winners if the intense competition continues on its current path. Instead, the consequences could be severe—undermining global stability, stalling scientific progress, and leading both nations toward a dangerous technological brinkmanship. This is particularly salient given the importance of Taiwan and the global foundry leader TSMC in the AI stack, and the increasing tensions around the high-tech island.
Heading blindly down this path will bring the risk of isolation and polarization, threatening not only international peace but also the vast potential benefits AI promises for humanity as a whole.
Historical narratives, geopolitical forces, and economic competition have all contributed to the current state of the US-China AI rivalry. A recent report from the US-China Economic and Security Review Commission, for example, frames the entire issue in binary terms, focused on dominance or subservience. This “winner takes all” logic overlooks the potential for global collaboration and could even provoke a self-fulfilling prophecy by escalating conflict. Under the new Trump administration this dynamic will likely become more accentuated, with increasing discussion of a Manhattan Project for AI and redirection of US military resources from Ukraine toward China.
Fortunately, a glimmer of hope for a responsible approach to AI collaboration appeared on January 17, when Donald Trump posted that he had restarted direct dialogue with Chairman Xi Jinping on various areas of collaboration and that, given their past cooperation, the two countries should continue to be “partners and friends.” The outcome of the TikTok drama, which has put Trump at odds with sharp China critics in his own administration and Congress, will be a preview of how his efforts to put US-China relations on a less confrontational trajectory play out.
The promise of AI for good
Western mass media usually focuses on attention-grabbing issues described in terms like the “existential risks of evil AI.” Unfortunately, the AI safety experts who get the most coverage often recite the same narratives, scaring the public. In reality, no credible research shows that more capable AI will become increasingly evil. We need to challenge the current false dichotomy of pure accelerationism versus doomerism to allow for a model more like collaborative acceleration.
It is important to note the significant difference between the way AI is perceived in Western developed countries and developing countries. In developed countries the public sentiment toward AI is 60% to 70% negative, while in the developing markets the positive ratings are 60% to 80%. People in the latter places have seen technology transform their lives for the better in the past decades and are hopeful AI will help solve the remaining issues they face by improving education, health care, and productivity, thereby elevating their quality of life and giving them greater world standing. What Western populations often fail to realize is that those same benefits could directly improve their lives as well, given the high levels of inequity even in developed markets. Consider what progress would be possible if we reallocated the trillions that go into defense budgets each year to infrastructure, education, and health-care projects.
Once we get to the next phase, AI will help us accelerate scientific discovery, develop new drugs, extend our health span, reduce our work obligations, and ensure access to high-quality education for all. This may sound idealistic, but given current trends, most of this can become a reality within a generation, and maybe sooner. To get there we’ll need more advanced AI systems, which will be a much more challenging goal if we divide up compute/data resources and research talent pools. Almost half of all top AI researchers globally (47%) were born or educated in China, according to industry studies. It’s hard to imagine how we could have gotten where we are without the efforts of Chinese researchers. Active collaboration with China on joint AI research could be pivotal to supercharging progress with a major infusion of quality training data and researchers.
The escalating AI competition between the US and China poses significant threats to both nations and to the entire world. The risks inherent in this rivalry are not hypothetical—they could lead to outcomes that threaten global peace, economic stability, and technological progress. Framing the development of artificial intelligence as a zero-sum race undermines opportunities for collective advancement and security. Rather than succumb to the rhetoric of confrontation, it is imperative that the US and China, along with their allies, shift toward collaboration and shared governance.
Our recommendations for policymakers:
- 1. Reduce national security dominance over AI policy. Both the US and China must recalibrate their approach to AI development, moving away from viewing AI primarily as a military asset. This means reducing the emphasis on national security concerns that currently dominate every aspect of AI policy. Instead, policymakers should focus on civilian applications of AI that can directly benefit their populations and address global challenges, such as health care, education, and climate change. The US also needs to investigate how to implement a possible universal basic income program as job displacement from AI adoption becomes a bigger issue domestically.
- 2. Promote bilateral and multilateral AI governance. Establishing a robust dialogue between the US, China, and other international stakeholders is crucial for the development of common AI governance standards. This includes agreeing on ethical norms, safety measures, and transparency guidelines for advanced AI technologies. A cooperative framework would help ensure that AI development is conducted responsibly and inclusively, minimizing risks while maximizing benefits for all.
- 3. Expand investment in detection and mitigation of AI misuse. The risk of AI misuse by bad actors, whether through misinformation campaigns, attacks on telecom, power, or financial systems, or cyberattacks with the potential to destabilize society, is the biggest existential threat to the world today. Dramatically increasing funding for and international cooperation in detecting and mitigating these risks is vital. The US and China must agree on shared standards for the responsible use of AI and collaborate on tools that can monitor and counteract misuse globally.
- 4. Create incentives for collaborative AI research. Governments should provide incentives for academic and industry collaborations across borders. By creating joint funding programs and research initiatives, the US and China can foster an environment where the best minds from both nations contribute to breakthroughs in AI that serve humanity as a whole. This collaboration would help pool talent, data, and compute resources, overcoming barriers that neither country could tackle alone. A global effort akin to a CERN for AI would bring much more value to the world, and a more peaceful outcome, than the Manhattan Project for AI that many in Washington are promoting today.
- 5. Establish trust-building measures. Both countries need to prevent misinterpretations of AI-related actions as aggressive or threatening. They could do this via data-sharing agreements, joint projects in nonmilitary AI, and exchanges between AI researchers. Reducing import restrictions for civilian AI use cases, for example, could help the nations rebuild some trust and make it possible for them to discuss deeper cooperation on joint research. These measures would help build transparency, reduce the risk of miscommunication, and pave the way for a less adversarial relationship.
- 6. Support the development of a global AI safety coalition. A coalition that includes major AI developers from multiple countries could serve as a neutral platform for addressing ethical and safety concerns. This coalition would bring together leading AI researchers, ethicists, and policymakers to ensure that AI progresses in a way that is safe, fair, and beneficial to all. This effort should not exclude China, as it remains an essential partner in developing and maintaining a safe AI ecosystem.
- 7. Shift the focus toward AI for global challenges. It is crucial that the world’s two AI superpowers use their capabilities to tackle global issues, such as climate change, disease, and poverty. By demonstrating the positive societal impacts of AI through tangible projects and presenting it not as a threat but as a powerful tool for good, the US and China can reshape public perception of AI.
Our choice is stark but simple: We can proceed down a path of confrontation that will almost certainly lead to mutual harm, or we can pivot toward collaboration, which offers the potential for a prosperous and stable future for all. Artificial intelligence holds the promise to solve some of the greatest challenges facing humanity, but realizing this potential depends on whether we choose to race against each other or work together.
The opportunity to harness AI for the common good is a chance the world cannot afford to miss.
Alvin Wang Graylin
Alvin Wang Graylin is a technology executive, author, investor, and pioneer with over 30 years of experience shaping innovation in AI, XR (extended reality), cybersecurity, and semiconductors. Currently serving as global vice president at HTC, Graylin was the company’s China president from 2016 to 2023. He is the author of Our Next Reality.
Paul Triolo
Paul Triolo is a partner for China and technology policy lead at DGA-Albright Stonebridge Group. He advises clients in technology, financial services, and other sectors as they navigate complex political and regulatory matters in the US, China, the European Union, India, and around the world.
Forget massive steel tanks—some scientists want to make chemicals with the help of rocks deep beneath Earth’s surface.
New research shows that ammonia, a chemical crucial for fertilizer, can be produced from rocks at temperatures and pressures that are common in the subsurface. The research was published today in Joule, and MIT Technology Review can exclusively report that a new company, called Addis Energy, was founded to commercialize the process.
Ammonia is used in most fertilizers and is a vital part of our modern food system. It’s also being considered for use as a green fuel in industries like transoceanic shipping. The problem is that current processes used to make ammonia require a lot of energy and produce huge amounts of the greenhouse gases that cause climate change—over 1% of the global total. The new study finds that the planet’s internal conditions can be used to produce ammonia in a much cleaner process.
“Earth can be a factory for chemical production,” says Iwnetim Abate, an MIT professor and author of the new study.
This idea could be a major change for the chemical industry, which today relies on huge facilities running reactions at extremely high temperatures and pressures to make ammonia.
The key ingredients for ammonia production are sources of nitrogen and hydrogen. Much of the focus on cleaner production methods currently lies in finding new ways to make hydrogen, since that chemical makes up the bulk of ammonia’s climate footprint, says Patrick Molloy, a principal at the nonprofit research agency Rocky Mountain Institute.
Recently, researchers and companies have located naturally occurring deposits of hydrogen underground. Iron-rich rocks tend to drive reactions that produce the gas, and these natural deposits could provide a source of low-cost, low-emissions hydrogen.
While geologic hydrogen is still in its infancy as an industry, some researchers are hoping to help the process along by stimulating production of hydrogen underground. With the right rocks, heat, and a catalyst, you can produce hydrogen cheaply and without emitting large amounts of climate pollution.
Hydrogen can be difficult to transport, though, so Abate was interested in going one step further by letting the conditions underground do the hard work in powering chemical reactions that transform hydrogen and nitrogen into ammonia. “As you dig, you get heat and pressure for free,” he says.
To test out how this might work, Abate and his team crushed up iron-rich minerals and added nitrates (a nitrogen source), water (a hydrogen source), and a catalyst to help reactions along in a small reactor in the lab. They found that even at relatively low temperatures and pressures, they could make ammonia in a matter of hours. If the process were scaled up, the researchers estimate, one well could produce 40,000 kilograms of ammonia per day.
While the reactions tend to go faster at high temperature and pressure, the researchers found that ammonia production could be an economically viable process even at 130 °C (266 °F) and a little over two atmospheres of pressure, conditions that would be accessible at depths reachable with existing drilling technology.
While the reactions work in the lab, there’s a lot of work to do to determine whether, and how, the process might actually work in the field. One thing the team will need to figure out is how to keep reactions going, because in the reaction that forms ammonia, the surface of the iron-rich rocks will be oxidized, leaving them in a state where they can’t keep reacting. But Abate says the team is working on controlling how thick the unusable layer of rock is, and its composition, so the chemical reactions can continue.
To commercialize this work, Abate is cofounding a company called Addis Energy with $4.25 million in pre-seed funds from investors including Engine Ventures. His cofounders include Michael Alexander and Charlie Mitchell (who have both spent time in the oil and gas industry) and Yet-Ming Chiang, an MIT professor and serial entrepreneur. The company will work on scaling up the research, including finding potential sites with the geological conditions to produce ammonia underground.
The good news for scale-up efforts is that much of the necessary technology already exists in oil and gas operations, says Alexander, Addis’s CEO. A field-deployed system will involve drilling, pumping fluid down into the ground, and extracting other fluids from beneath the surface, all very common operations in that industry. “There’s novel chemistry that’s wrapped in an oil and gas package,” he says.
The team will also work on refining cost estimates for the process and gaining a better understanding of safety and sustainability, Abate says. Ammonia is a toxic industrial chemical, but it’s common enough for there to be established procedures for handling, storing, and transporting it, says RMI’s Molloy.
Judging from the researchers’ early estimates, ammonia produced with this method could cost up to $0.55 per kilogram. That’s more than ammonia produced with fossil fuels today ($0.40/kg), but the technique would likely be less expensive than other low-emissions methods of producing the chemical. Tweaks to the process, including using nitrogen from the air instead of nitrates, could help cut costs further, even as low as $0.20/kg.
New approaches to making ammonia could be crucial for climate efforts. “It’s a chemical that’s essential to our way of life,” says Karthish Manthiram, a professor at Caltech who studies electrochemistry, including alternative ammonia production methods.
The team’s research appears to be designed with scalability in mind from the outset, and using Earth itself as a reactor is the kind of thinking needed to accelerate the long-term journey to sustainable chemical production, Manthiram adds.
While the company focuses on scale-up efforts, there’s plenty of fundamental work left for Abate and other labs to do to understand what’s going on during the reactions at the atomic level, particularly at the interface between the rocks and the reacting fluid.
Research in the lab is exciting, but it’s only the first step, Abate says. The next one is seeing if this actually works in the field.
Correction: Due to a unit typo in the journal article, a previous version of this story misstated the amount of ammonia each well could theoretically produce. The estimate is 40,000 kilograms of ammonia per day, not 40,000 tons.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Why it’s so hard to use AI to diagnose cancer
Finding and diagnosing cancer is all about spotting patterns. Radiologists use x-rays and magnetic resonance imaging to illuminate tumors, and pathologists examine tissue from kidneys, livers, and other areas under microscopes. They look for patterns that show how severe a cancer is, whether particular treatments could work, and where the malignancy may spread.
Visual analysis is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model will be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis.
We’re starting to see lots of new efforts to build such a model—at least seven attempts in the last year alone. But they all remain experimental. What will it take to make them good enough to be used in the real world? Read the full story.
—James O’Donnell
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Long-acting HIV prevention meds: 10 Breakthrough Technologies 2025
In June 2024, results from a trial of a new medicine to prevent HIV were announced—and they were jaw-dropping. Lenacapavir, a treatment injected once every six months, protected over 5,000 girls and women in Uganda and South Africa from getting HIV. And it was 100% effective.
So far, the FDA has approved the drug only for people who already have HIV that’s resistant to other treatments. But its producer Gilead has signed licensing agreements with manufacturers to produce generic versions for HIV prevention in 120 low-income countries.
The United Nations has set a goal of ending AIDS by 2030. It’s ambitious, to say the least: We still see over 1 million new HIV infections globally every year. But we now have the medicines to get us there. What we need is access. Read the full story.
—Jessica Hamzelou
Long-acting HIV prevention meds is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review’s annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump signed an executive order delaying TikTok’s ban
Parent company ByteDance has 75 days to reach a deal to stay live in the US. (WP $)
+ China appears to be keen to keep the platform operating, too. (WSJ $)
2 Neo-Nazis are celebrating Elon Musk’s salutes
They’re thrilled by the two Nazi-like salutes he gave at a post-inauguration rally. (Wired $)
+ Whether the gestures were intentional or not, extremists have chosen to interpret them that way. (Rolling Stone $)
+ MAGA is all about granting unchecked power to the already powerful. (Vox)
+ How tech billionaires are hoping Trump will reward them for their support. (NY Mag $)
3 Trump is withdrawing the US from the World Health Organization
He’s accused the agency of mishandling the covid-19 pandemic. (Ars Technica)
+ He first tried to leave the WHO in 2020, but failed to complete the withdrawal before he left office. (Reuters)
+ Trump is also working on pulling the US out of the Paris climate agreement. (The Verge)
4 Meta will keep using fact checkers outside the US—for now
It wants to see how its crowdsourced fact verification system works in America before rolling it out further. (Bloomberg $)
5 Startup Friend has delayed shipments of its AI necklace
Customers are unlikely to receive their pre-orders before Q3. (TechCrunch)
+ Introducing: The AI Hype Index. (MIT Technology Review)
6 This sophisticated tool can pinpoint where a photo was taken in seconds
Members of the public have been trying to use GeoSpy for nefarious purposes for months. (404 Media)
7 Los Angeles is covered in ash
And it could take years before it fully disappears. (The Atlantic $)
8 Singapore is turning to AI companions to care for its elders
Robots are filling the void left by an absence of human nurses. (Rest of World)
+ Inside Japan’s long experiment in automating elder care. (MIT Technology Review)
9 The lost art of using a pen
Typing and swiping are replacing good old-fashioned paper and ink. (The Guardian)
10 LinkedIn is getting humorous
Posts are getting more personal, with a decidedly comedic bent. (FT $)
Quote of the day
“It’s been really beautiful to watch how two communities that would be considered polar opposites have come together.”
—Khalil Bowens, a content creator based in Los Angeles, reflects to the Wall Street Journal on the influx of Americans joining the Chinese social media app Xiaohongshu.
The big story
Inside the messy ethics of making war with machines
August 2023
In recent years, intelligent autonomous weapons—weapons that can select and fire upon targets without any human input—have become a matter of serious concern. Giving an AI system the power to decide matters of life and death would radically change warfare forever.
Intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use.
However, these systems have become sophisticated enough to raise novel questions—ones that are surprisingly tricky to answer. What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill? Read the full story.
—Arthur Holland Michel
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Baby octopuses aren’t just cute—they can change color from the moment they’re born.
+ Nintendo artist Takaya Imamura played a key role in making the company the gaming juggernaut it is today.
+ David Lynch wasn’t just a master of imagery, the way he deployed music to creep us out was second to none.
+ Only got a bag of rice in the cupboard? No problem.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Peering into the body to find and diagnose cancer is all about spotting patterns. Radiologists use x-rays and magnetic resonance imaging to illuminate tumors, and pathologists examine tissue from kidneys, livers, and other areas under microscopes and look for patterns that show how severe a cancer is, whether particular treatments could work, and where the malignancy may spread.
In theory, artificial intelligence should be great at helping out. “Our job is pattern recognition,” says Andrew Norgan, a pathologist and medical director of the Mayo Clinic’s digital pathology platform. “We look at the slide and we gather pieces of information that have been proven to be important.”
Visual analysis is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model will be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis. We’re starting to see lots of new efforts to build such a model—at least seven attempts in the last year alone—but they all remain experimental. What will it take to make them good enough to be used in the real world?
Details about the latest effort to build such a model, led by the AI health company Aignostics with the Mayo Clinic, were published on arXiv earlier this month. The paper has not been peer-reviewed, but it reveals much about the challenges of bringing such a tool to real clinical settings.
The model, called Atlas, was trained on 1.2 million tissue samples from 490,000 cases. Its accuracy was tested against six other leading AI pathology models. These models compete on shared tests like classifying breast cancer images or grading tumors, where the model’s predictions are compared with the correct answers given by human pathologists. Atlas beat rival models on six out of nine tests. It earned its highest score for categorizing cancerous colorectal tissue, reaching the same conclusion as human pathologists 97.1% of the time. For another task, though—classifying tumors from prostate cancer biopsies—Atlas beat the other models’ high scores with a score of just 70.5%. Its average across nine benchmarks showed that it got the same answers as human experts 84.6% of the time.
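The scores above all come from the same simple comparison: the model’s call on each case versus the pathologist’s, tallied per task and then averaged across tasks. As a rough illustration (not the paper’s actual evaluation code), here is a minimal Python sketch of that bookkeeping; the task names and values are placeholders, not Atlas’s real benchmark data.

```python
# Illustrative only: per-task "agreement with pathologists" and a simple
# cross-benchmark average, in the spirit of the scores quoted above.
# Task names and values are placeholders, not the actual Atlas results.

def agreement_rate(model_preds, pathologist_labels):
    """Fraction of cases where the model's call matches the pathologist's."""
    assert len(model_preds) == len(pathologist_labels)
    matches = sum(p == y for p, y in zip(model_preds, pathologist_labels))
    return matches / len(model_preds)

# Hypothetical per-benchmark scores for one model (fractions, not percentages).
benchmark_scores = {
    "colorectal_tissue_classification": 0.971,
    "prostate_biopsy_grading": 0.705,
    "breast_cancer_classification": 0.880,
    # ...remaining tasks in a nine-benchmark suite
}

average = sum(benchmark_scores.values()) / len(benchmark_scores)
print(f"Average agreement across {len(benchmark_scores)} benchmarks: {average:.1%}")
```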
Let’s think about what this means. The best way to know what’s happening to cancerous cells in tissues is to have a sample examined by a pathologist, so that’s the performance that AI models are measured against. The best models are approaching humans in particular detection tasks but lagging behind in many others. So how good does a model have to be to be clinically useful?
“Ninety percent is probably not good enough. You need to be even better,” says Carlo Bifulco, chief medical officer at Providence Genomics and co-creator of GigaPath, one of the other AI pathology models examined in the Mayo Clinic study. But, Bifulco says, AI models that don’t score perfectly can still be useful in the short term, and could potentially help pathologists speed up their work and make diagnoses more quickly.
What obstacles are getting in the way of better performance? Problem number one is training data.
“Fewer than 10% of pathology practices in the US are digitized,” Norgan says. That means tissue samples are placed on slides and analyzed under microscopes, and then stored in massive registries without ever being documented digitally. Though European practices tend to be more digitized, and there are efforts underway to create shared data sets of tissue samples for AI models to train on, there’s still not a ton to work with.
Without diverse data sets, AI models struggle to identify the wide range of abnormalities that human pathologists have learned to interpret. That includes for rare diseases, says Maximilian Alber, cofounder and CTO of Aignostics. Scouring the publicly available databases for tissue samples of particularly rare diseases, “you’ll find 20 samples over 10 years,” he says.
Around 2022, the Mayo Clinic foresaw that this lack of training data would be a problem. It decided to digitize all of its own pathology practices moving forward, along with 12 million slides from its archives dating back decades (patients had consented to their being used for research). It hired a company to build a robot that began taking high-resolution photos of the tissues, working through up to a million samples per month. From these efforts, the team was able to collect the 1.2 million high-quality samples used to train the Mayo model.
This brings us to problem number two for using AI to spot cancer. Tissue samples from biopsies are tiny—often just a couple of millimeters in diameter—but are magnified to such a degree that digital images of them contain more than 14 billion pixels. That makes them about 287,000 times larger than images used to train the best AI image recognition models to date.
“That obviously means lots of storage costs and so forth,” says Hoifung Poon, an AI researcher at Microsoft who worked with Bifulco to create GigaPath, which was featured in Nature last year. But it also forces important decisions about which bits of the image you use to train the AI model, and which cells you might miss in the process. To make Atlas, the Mayo Clinic used what’s referred to as a tile method, essentially creating lots of snapshots from the same sample to feed into the AI model. Figuring out how to select these tiles is both art and science, and it’s not yet clear which ways of doing it lead to the best results.
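To make the scale concrete, a 14-billion-pixel slide is roughly 280,000 times the size of a standard 224-by-224-pixel training image, which is why these models ingest tiles rather than whole slides. The snippet below is a minimal, hypothetical sketch of the general tile-extraction idea, assuming a whole-slide image read with the OpenSlide library; the tile size and background filter are illustrative choices, not the Mayo Clinic’s or GigaPath’s actual pipeline.

```python
# A minimal, hypothetical tiling sketch: walk a whole-slide image in fixed-size
# patches and keep only tiles that appear to contain tissue. Tile size and the
# background threshold are assumptions made for illustration.
import numpy as np
import openslide  # pip install openslide-python (needs the OpenSlide C library)

TILE_SIZE = 256  # pixels per side

def iter_tissue_tiles(slide_path, tile_size=TILE_SIZE):
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.dimensions  # full-resolution (level 0) size
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            region = slide.read_region((x, y), 0, (tile_size, tile_size)).convert("RGB")
            tile = np.asarray(region)
            # Skip mostly-white background tiles; keep ones with stained tissue.
            if tile.mean() < 230:
                yield (x, y), tile

# Usage (hypothetical file path): the kept tiles become training examples.
# for coords, tile in iter_tissue_tiles("slide_0001.svs"):
#     ...
```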
Thirdly, there’s the question of which benchmarks are most important for a cancer-spotting AI model to perform well on. The Atlas researchers tested their model in the challenging domain of molecular-related benchmarks, which involves trying to find clues from sample tissue images to guess what’s happening on a molecular level. Here’s an example: Your body’s mismatch repair genes are of particular concern for cancer, because they catch errors made when your DNA gets replicated. If these errors aren’t caught, they can drive the development and progression of cancer.
“Some pathologists might tell you they kind of get a feeling when they think something’s mismatch-repair deficient based on how it looks,” Norgan says. But pathologists don’t act on that gut feeling alone. They can do molecular testing for a more definitive answer. What if instead, Norgan says, we can use AI to predict what’s happening on the molecular level? It’s an experiment: Could the AI model spot underlying molecular changes that humans can’t see?
Generally no, it turns out. Or at least not yet. Atlas’s average for the molecular testing was 44.9%. That’s the best performance for AI so far, but it shows this type of testing has a long way to go.
Bifulco says Atlas represents incremental but real progress. “My feeling, unfortunately, is that everybody’s stuck at a similar level,” he says. “We need something different in terms of models to really make dramatic progress, and we need larger data sets.”
Now read the rest of The Algorithm
Deeper Learning
OpenAI has created an AI model for longevity science
AI has long had its fingerprints on the science of protein folding. But OpenAI now says it’s created a model that can engineer proteins, turning regular cells into stem cells. That goal has been pursued by companies in longevity science, because stem cells can produce any other tissue in the body and, in theory, could be a starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells.
Why it matters: The work was a product of OpenAI’s collaboration with the longevity company Retro Biosciences, in which Sam Altman invested $180 million. It represents OpenAI’s first model focused on biological data and its first public claim that its models can deliver scientific results. The AI model reportedly engineered more effective proteins, and more quickly, than the company’s scientists could. But outside scientists can’t evaluate the claims until the studies have been published. Read more from Antonio Regalado.
Bits and Bytes
What we know about the TikTok ban
The popular video app went dark in the United States late Saturday and then came back around noon on Sunday, even as a law banning it took effect. (The New York Times)
Why Meta might not end up like X
X lost lots of advertising dollars as Elon Musk changed the platform’s policies. But Facebook and Instagram’s massive scale make them hard platforms for advertisers to avoid. (Wall Street Journal)
What to expect from Neuralink in 2025
More volunteers will get Elon Musk’s brain implant, but don’t expect a product soon. (MIT Technology Review)
A former fact-checking outlet for Meta signed a new deal to help train AI models
Meta paid media outlets like Agence France-Presse for years to do fact checking on its platforms. Since Meta announced it would shutter those programs, Europe’s leading AI company, Mistral, has signed a deal with AFP to use some of its content in its AI models. (Financial Times)
OpenAI’s AI reasoning model “thinks” in Chinese sometimes, and no one really knows why
While reasoning its way to a response, the model often switches to Chinese, perhaps a reflection of the fact that many data labelers are based in China. (TechCrunch)