This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The way we measure progress in AI is terrible
Every time a new AI model is released, it’s typically touted as acing a series of benchmark tests. OpenAI’s GPT-4o, for example, was launched in May with a compilation of results showing it topping every other AI company’s latest model in several tests.
The problem is that these benchmarks are poorly designed, their results are hard to replicate, and the metrics they use are frequently arbitrary, according to new research. That matters because AI models’ scores against these benchmarks determine the level of scrutiny they receive.
AI companies frequently cite benchmarks as testament to a new model’s success, and those benchmarks already form part of some governments’ plans for regulating AI. But right now, they might not be good enough to use that way—and researchers have some ideas for how they should be improved.
—Scott J Mulligan
We need to start wrestling with the ethics of AI agents
Generative AI models have become remarkably good at conversing with us and at creating images, videos, and music for us, but they’re not all that good at doing things for us.
AI agents promise to change that. Last week researchers published a new paper explaining how they trained simulation agents to replicate 1,000 people’s personalities with stunning accuracy.
AI models that mimic you could go out and act on your behalf in the near future. If such tools become cheap and easy to build, it will raise lots of new ethical concerns, but two in particular stand out. Read the full story.
—James O’Donnell
This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump has pledged special tariffs on China, Canada, and Mexico
He says it’s to prevent drug trafficking and illegal migration into the US. (WP $)
+ The tariffs are bad news for Chinese EV firm BYD’s planned factory in Mexico. (WSJ $)
+ How Trump’s tariffs could drive up the cost of batteries, EVs, and more. (MIT Technology Review)
2 Maternal doctors are leaving Texas
Abortion restrictions make it much harder to administer miscarriage care. (New Yorker $)
+ Porsha Ngumezi is the third woman known to have died under the state’s ban. (ProPublica)
3 Bluesky has been accused of breaching EU data rules
It’s failed to declare how many EU users it has and where it’s legally based. (FT $)
+ Bluesky says it’s working to comply with the disclosure rules. (The Information $)
4 How Amazon plans to take on Nvidia
Its engineers are racing to get its AI chips running reliably in data centers by the end of the year. (Bloomberg $)
+ What’s next in chips. (MIT Technology Review)
5 Neuralink will test whether its brain implant can control a robotic arm
If it can, it’ll be the first wireless brain-computer interface to do so. (Wired $)
+ Meet the other companies developing brain-computer interfaces. (MIT Technology Review)
6 Your Pokémon Go data could be bought by militaries and governments
Parent company Niantic hasn’t ruled it out. (404 Media)
7 Inside Google’s little-known nuclear energy research group
It’s quietly been seeking to further our understanding of nuclear energy for years. (IEEE Spectrum)
+ Why the lifetime of nuclear plants is getting longer. (MIT Technology Review)
8 US farms desperately need fresh water
New desalination projects could help make abundant saltwater more plant-friendly. (Knowable Magazine)
+ How we drained California dry. (MIT Technology Review)
9 Nvidia’s new AI model creates entirely new sounds
Including a screaming saxophone and an angry cello. (Ars Technica)
+ These impossible instruments could change the future of music. (MIT Technology Review)
10 We may finally know what causes mysterious radio flashes from space
Asteroids and comets bashing into neutron stars could be behind them. (New Scientist $)
Quote of the day
“Did we change Big Tech? My answer is no.”
—Tommaso Valletti, an economist who worked under the European Union’s antitrust regulator Margrethe Vestager, reflects on her legacy to the New York Times as she prepares to step down.
The big story
How to fix the internet
October 2023
We’re in a very strange moment for the internet. We all know it’s broken. But there’s a sense that things are about to change. The stranglehold that the big social platforms have had on us for the last decade is weakening.
There’s a sort of common wisdom that the internet is irredeemably bad. That social platforms, hungry to profit off your data, opened a Pandora’s box that cannot be closed.
But the internet has also provided a haven for marginalized groups and a place for support. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh.
The internet is worth fighting for because despite all the misery, there’s still so much good to be found there. And yet, fixing online discourse is the definition of a hard problem. But don’t worry. I have an idea. Read the full story.
—Katie Notopoulos
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ America is super into republishing classic literature these days.
+ I’m convinced there’s nothing more innovative and daring than a hungry cat (thanks Dorothy!).
+ Gen Z famously loves to mock the way millennials dress, but needless to say: we’ve had the last laugh.
+ How music influences math, believe it or not.
This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.
Generative AI models have become remarkably good at conversing with us and at creating images, videos, and music for us, but they’re not all that good at doing things for us.
AI agents promise to change that. Think of them as AI models with a script and a purpose. They tend to come in one of two flavors.
The first, called tool-based agents, can be coached using natural human language (rather than coding) to complete digital tasks for us. Anthropic released one such agent in October—the first from a major AI model-maker—that can translate instructions (“Fill in this form for me”) into actions on someone’s computer, moving the cursor to open a web browser, navigating to find data on relevant pages, and filling in a form using that data. Salesforce has released its own agent too, and OpenAI reportedly plans to release one in January.
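To make the pattern concrete, here is a minimal sketch (in Python) of the observe-decide-act loop that tool-based agents of this kind generally follow. Everything in it is a hypothetical stand-in: the function names capture_screen, choose_action, and perform_action, the Action format, and the canned responses are illustrative, not Anthropic’s or Salesforce’s actual API.

```python
# Minimal sketch of a tool-based agent loop (hypothetical names, not a real API).
# The agent repeatedly looks at the screen, asks a model for the next step,
# and executes that step until the model says the task is done.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "done"
    target: str = ""   # e.g. a field label or button name
    text: str = ""     # text to type, if any

def capture_screen() -> str:
    """Stand-in for a screenshot or accessibility-tree capture."""
    return "form with fields: name, email; submit button"

def choose_action(instruction: str, screen: str, history: list[Action]) -> Action:
    """Stand-in for a model call that maps (instruction, screen state) to one action."""
    if not history:
        return Action("type", target="name", text="Ada Lovelace")
    if len(history) == 1:
        return Action("type", target="email", text="ada@example.com")
    if len(history) == 2:
        return Action("click", target="submit")
    return Action("done")

def perform_action(action: Action) -> None:
    """Stand-in for cursor and keyboard control."""
    print(f"executing: {action}")

def run_agent(instruction: str, max_steps: int = 10) -> None:
    history: list[Action] = []
    for _ in range(max_steps):
        action = choose_action(instruction, capture_screen(), history)
        if action.kind == "done":
            break
        perform_action(action)
        history.append(action)

run_agent("Fill in this form for me")
```

A production system would replace each stub with a real screenshot capture, a real model call, and OS-level input control; the loop structure, though, is the core idea.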
The other type of agent is called a simulation agent, and you can think of these as AI models designed to behave like human beings. The first people to work on creating these agents were social science researchers. They wanted to conduct studies that would be expensive, impractical, or unethical to do with real human subjects, so they used AI to simulate subjects instead. This trend particularly picked up with the publication of an oft-cited 2023 paper by Joon Sung Park, a PhD candidate at Stanford, and colleagues called “Generative Agents: Interactive Simulacra of Human Behavior.”
Last week Park and his team published a new paper on arXiv called “Generative Agent Simulations of 1,000 People.” In this work, researchers had 1,000 people participate in two-hour interviews with an AI. Shortly after, the team was able to create simulation agents that replicated each participant’s values and preferences with stunning accuracy.
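One simplified way to picture how such a simulation agent could work: the participant’s interview transcript is placed in a language model’s context, and the model is then asked to answer new questions in that person’s voice. The sketch below assumes a generic complete() text-generation call and made-up interview answers; the actual pipeline in the paper is more involved.

```python
# Simplified sketch of a simulation agent: condition a language model on a
# participant's interview answers, then ask it to respond as that person.
# `complete` is a hypothetical stand-in for any text-generation API.

def complete(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    return "(model-generated answer in the participant's voice)"

def build_persona_prompt(interview_qa: list[tuple[str, str]], new_question: str) -> str:
    transcript = "\n".join(f"Q: {q}\nA: {a}" for q, a in interview_qa)
    return (
        "Below is an interview with a study participant.\n"
        f"{transcript}\n\n"
        "Answer the next question the way this participant would.\n"
        f"Q: {new_question}\nA:"
    )

# Illustrative, invented interview answers.
interview = [
    ("How do you usually spend your weekends?", "Hiking, mostly, and reading."),
    ("What worries you about technology?", "Losing control of my personal data."),
]

prompt = build_persona_prompt(interview, "Would you share your location with an app?")
print(complete(prompt))
```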
There are two really important developments here. First, it’s clear that leading AI companies think it’s no longer good enough to build dazzling generative AI tools; they now have to build agents that can accomplish things for people. Second, it’s getting easier than ever to get such AI agents to mimic the behaviors, attitudes, and personalities of real people. What were once two distinct types of agents—simulation agents and tool-based agents—could soon become one thing: AI models that can not only mimic your personality but go out and act on your behalf.
Research on this is underway. Companies like Tavus are hard at work helping users create “digital twins” of themselves. But the company’s CEO, Hassaan Raza, envisions going further, creating AI agents that can take the form of therapists, doctors, and teachers.
If such tools become cheap and easy to build, it will raise lots of new ethical concerns, but two in particular stand out. The first is that these agents could create even more personal, and even more harmful, deepfakes. Image generation tools have already made it simple to create nonconsensual pornography using a single image of a person, but this crisis will only deepen if it’s easy to replicate someone’s voice, preferences, and personality as well. (Park told me he and his team spent more than a year wrestling with ethical issues like this in their latest research project, engaging in many conversations with Stanford’s ethics board and drafting policies on how the participants could withdraw their data and contributions.)
The second is the fundamental question of whether we deserve to know whether we’re talking to an agent or a human. If you complete an interview with an AI and submit samples of your voice to create an agent that sounds and responds like you, are your friends or coworkers entitled to know when they’re talking to it and not to you? On the other side, if you ring your cell service provider or doctor’s office and a cheery customer service agent answers the line, are you entitled to know whether you’re talking to an AI?
This future feels far off, but it isn’t. There’s a chance that when we get there, there will be even more pressing and pertinent ethical questions to ask. In the meantime, read more from my piece on AI agents here, and ponder how well you think an AI interviewer could get to know you in two hours.
Now read the rest of The Algorithm
Deeper Learning
Inside Clear’s ambitions to manage your identity beyond the airport
Clear is the most visible biometrics company around, and one you’ve likely interacted with already, whether passing security checkpoints at airports and stadiums or verifying your identity on LinkedIn. Along the way, it’s built one of the largest private repositories of identity data on the planet, including scans of fingerprints, irises, and faces. A confluence of factors is now accelerating the adoption of identity verification technologies—including AI, of course, as well as the lingering effects of the pandemic’s push toward “contactless” experiences—and Clear aims to be the ubiquitous provider of these services. In the near future, countless situations where you might need an ID or credit card might require no more than showing your face.
Why this matters: Now that biometrics have gone mainstream, what—and who—bears the cost? Because this convenience, even if chosen by only some of us, leaves all of us wrestling with the effects. If Clear gains ground in its vision, it will move us toward a world where we’re increasingly obligated to give up our biometric data to a system that’s vulnerable to data leaks. Read more from Eileen Guo.
Bits and Bytes
Inside the booming “AI pimping” industry
Instagram is being flooded with hundreds of AI-generated influencers who are stealing videos from real models and adult content creators, giving them AI-generated faces, and monetizing their bodies with links to dating sites, Patreon, OnlyFans competitors, and various AI apps. (404 Media)
How to protect your art from AI
There is little you can do if your work has already been scraped into a data set, but you can take steps to prevent future work from being used that way. Here are four ways to do that. (MIT Technology Review)
Elon Musk and Vivek Ramaswamy have offered details on their plans to cut regulations
In an op-ed, the pair emphasize that their goal will be to immediately use executive orders to eliminate regulations issued by federal agencies, using “a lean team of small-government crusaders.” This means AI guidelines issued by federal agencies under the Biden administration, like ethics rules from the National Institute of Standards and Technology or principles in the National Security Memorandum on AI, could be rolled back or eliminated completely. (Wall Street Journal)
How OpenAI tests its models
OpenAI gave us a glimpse into how it selects people to do its testing and how it’s working to automate the testing process by, essentially, having large language models attack each other. (MIT Technology Review)
Every time a new AI model is released, it’s typically touted as acing a series of benchmark tests. OpenAI’s GPT-4o, for example, was launched in May with a compilation of results showing it topping every other AI company’s latest model in several tests.
The problem is that these benchmarks are poorly designed, their results are hard to replicate, and the metrics they use are frequently arbitrary, according to new research. That matters because AI models’ scores against these benchmarks will determine the level of scrutiny and regulation they receive.
“It seems to be like the Wild West because we don’t really have good evaluation standards,” says Anka Reuel, an author of the paper, who is a PhD student in computer science at Stanford University and a member of its Center for AI Safety.
A benchmark is essentially a test that an AI takes. It can take a multiple-choice format, like the most popular one, the Massive Multitask Language Understanding benchmark (MMLU), or it can evaluate an AI’s ability to do a specific task or the quality of its text responses to a fixed set of questions.
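For a sense of the mechanics, here is a minimal sketch of how a multiple-choice benchmark in the MMLU style is typically scored: the model sees a question and lettered options, its chosen letter is compared with the answer key, and the score is simple accuracy. The two sample items and the ask_model stub are illustrative, not questions from the real MMLU data set.

```python
# Minimal sketch of scoring a multiple-choice benchmark in the style of MMLU.
# Each item has a question, lettered options, and a gold answer; the score is
# simple accuracy. `ask_model` is a stub standing in for a real model call.

items = [
    {
        "question": "What is the derivative of x**2?",
        "options": {"A": "x", "B": "2x", "C": "x**2", "D": "2"},
        "answer": "B",
    },
    {
        "question": "Which gas makes up most of Earth's atmosphere?",
        "options": {"A": "Oxygen", "B": "Carbon dioxide", "C": "Nitrogen", "D": "Argon"},
        "answer": "C",
    },
]

def ask_model(question: str, options: dict[str, str]) -> str:
    """Stand-in for a model call; a real harness parses the reply into a letter."""
    return "B"

def score(items: list[dict]) -> float:
    correct = sum(ask_model(it["question"], it["options"]) == it["answer"] for it in items)
    return correct / len(items)

print(f"accuracy: {score(items):.0%}")  # 50% with the stub above
```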
AI companies frequently cite benchmarks as testament to a new model’s success. “The developers of these models tend to optimize for the specific benchmarks,” says Anna Ivanova, professor of psychology at the Georgia Institute of Technology and head of its Language, Intelligence, and Thought (LIT) lab, who was not involved in the Stanford research.
These benchmarks already form part of some governments’ plans for regulating AI. For example, the EU AI Act, which goes into force in August 2025, references benchmarks as a tool to determine whether or not a model demonstrates “systemic risk”; if it does, it will be subject to higher levels of scrutiny and regulation. The UK AI Safety Institute references benchmarks in Inspect, which is its framework for evaluating the safety of large language models.
But right now, they might not be good enough to use that way. “There’s this potential false sense of safety we’re creating with benchmarks if they aren’t well designed, especially for high-stakes use cases,” says Reuel. “It may look as if the model is safe, but it is not.”
Given the increasing importance of benchmarks, Reuel and her colleagues wanted to look at the most popular examples to figure out what makes a good one—and whether the ones we use are robust enough. The researchers first set out to verify the benchmark results that developers put out, but often they couldn’t reproduce them. To test a benchmark, you typically need some instructions or code to run it on a model. Many benchmark creators didn’t make the code to run their benchmark publicly available. In other cases, the code was outdated.
Benchmark creators often don’t make the questions and answers in their data set publicly available either. If they did, companies could just train their model on the benchmark; it would be like letting a student see the questions and answers on a test before taking it. But withholding them makes the benchmarks themselves hard to evaluate.
Another issue is that benchmarks are frequently “saturated,” which means all the problems have pretty much been solved. For example, let’s say there’s a test with simple math problems on it. The first generation of an AI model gets 20% on the test, failing. The second generation gets 90%, and the third generation gets 93%. An outsider may look at these results and conclude that AI progress has slowed down, but another interpretation is simply that the benchmark got solved and is no longer a great measure of progress. It fails to capture the difference in ability between the second and third generations of a model.
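One way to see why the jump from 90% to 93% is easy to misread: near a benchmark’s ceiling, a small gain in accuracy can correspond to a large reduction in the errors that remain. A quick back-of-the-envelope calculation, using only the hypothetical scores from the paragraph above:

```python
# Back-of-the-envelope illustration of benchmark saturation, using the
# hypothetical scores from the paragraph above: once a benchmark is nearly
# solved, small accuracy gains hide large reductions in remaining errors.

scores = {"gen 1": 0.20, "gen 2": 0.90, "gen 3": 0.93}

prev = None
for name, acc in scores.items():
    error = 1 - acc
    line = f"{name}: accuracy {acc:.0%}, error rate {error:.0%}"
    if prev is not None:
        reduction = (prev - error) / prev
        line += f" ({reduction:.0%} of previous errors eliminated)"
    print(line)
    prev = error

# gen 2 -> gen 3 looks like a 3-point gain, but it removes 30% of the
# remaining errors -- and a saturated test says nothing about the harder
# problems it never included.
```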
One of the goals of the research was to define a list of criteria that make a good benchmark. “It’s definitely an important problem to discuss the quality of the benchmarks, what we want from them, what we need from them,” says Ivanova. “The issue is that there isn’t one good standard to define benchmarks. This paper is an attempt to provide a set of evaluation criteria. That’s very useful.”
The paper was accompanied by the launch of a website, BetterBench, that ranks the most popular AI benchmarks. Rating factors include whether or not experts were consulted on the design, whether the tested capability is well defined, and other basics—for example, is there a feedback channel for the benchmark, or has it been peer-reviewed?
The MMLU benchmark had the lowest ratings. “I disagree with these rankings. In fact, I’m an author of some of the papers ranked highly, and would say that the lower ranked benchmarks are better than them,” says Dan Hendrycks, director of CAIS, the Center for AI Safety, and one of the creators of the MMLU benchmark. That said, Hendrycks still believes that the best way to move the field forward is to build better benchmarks.
Some think the criteria may be missing the bigger picture. “The paper adds something valuable. Implementation criteria and documentation criteria—all of this is important. It makes the benchmarks better,” says Marius Hobbhahn, CEO of Apollo Research, a research organization specializing in AI evaluations. “But for me, the most important question is, do you measure the right thing? You could check all of these boxes, but you could still have a terrible benchmark because it just doesn’t measure the right thing.”
Essentially, even if a benchmark is perfectly designed, one that tests the model’s ability to provide compelling analysis of Shakespeare sonnets may be useless if someone is really concerned about AI’s hacking capabilities.
“You’ll see a benchmark that’s supposed to measure moral reasoning. But what that means isn’t necessarily defined very well. Are people who are experts in that domain being incorporated in the process? Often that isn’t the case,” says Amelia Hardy, another author of the paper and an AI researcher at Stanford University.
There are organizations actively trying to improve the situation. For example, a new benchmark from Epoch AI, a research organization, was designed with input from 60 mathematicians and verified as challenging by two winners of the Fields Medal, which is the most prestigious award in mathematics. The participation of these experts fulfills one of the criteria in the BetterBench assessment. The current most advanced models are able to answer less than 2% of the questions on the benchmark, which means there’s a significant way to go before it is saturated.
“We really tried to represent the full breadth and depth of modern math research,” says Tamay Besiroglu, associate director at Epoch AI. Despite the difficulty of the test, Besiroglu speculates it will take only around four or five years for AI models to score well against it.
And Hendrycks’ organization, CAIS, is collaborating with Scale AI to create a new benchmark, dubbed Humanity’s Last Exam (HLE), that he claims will test AI models against the frontier of human knowledge. “HLE was developed by a global team of academics and subject-matter experts,” says Hendrycks. “HLE contains unambiguous, non-searchable questions that require a PhD-level understanding to solve.” If you want to contribute a question, you can here.
Although there is a lot of disagreement over what exactly should be measured, many researchers agree that more robust benchmarks are needed, especially since they set a direction for companies and are a critical tool for governments.
“Benchmarks need to be really good,” Hardy says. “We need to have an understanding of what ‘really good’ means, which we don’t right now.”