As AI models become better at mimicking human behavior, it’s becoming increasingly difficult to distinguish between real human internet users and sophisticated systems imitating them.
That’s a real problem when those systems are deployed for nefarious ends like spreading misinformation or conducting fraud, and it makes it a lot harder to trust what you encounter online.
A group of 32 researchers from institutions including OpenAI, Microsoft, MIT, and Harvard has developed a potential solution—a verification concept called “personhood credentials.” These credentials prove that their holder is a real person, without revealing any further information about the person’s identity. The team explored the idea in a non-peer-reviewed paper posted to the arXiv preprint server earlier this month.
Personhood credentials rely on the fact that AI systems still cannot bypass state-of-the-art cryptographic systems or pass as people in the offline, real world.
To request such credentials, people would have to physically go to one of a number of issuers, like a government or some other kind of trusted organization. They would be asked to provide evidence of being a real human, such as a passport or biometric data. Once approved, they’d receive a single credential to store on their devices the way it’s currently possible to store credit and debit cards in smartphones’ wallet apps.
To use these credentials online, a user could present them to a third-party digital service provider, which could then verify them using a cryptographic protocol called a zero-knowledge proof. That would confirm the holder was in possession of a personhood credential without disclosing any further unnecessary information.
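The flow above can be sketched with a Schnorr-style proof of knowledge, one concrete zero-knowledge technique (the paper itself does not prescribe a specific protocol). Everything here is illustrative: the tiny group parameters, the function names, and the issuance step are all hypothetical stand-ins, and a real deployment would use a standardized elliptic-curve group and a vetted cryptographic library rather than hand-rolled arithmetic.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup.
# These parameters are far too small for real security -- illustration only.
P, Q, G = 1019, 509, 4

def _challenge(y: int, t: int) -> int:
    """Fiat-Shamir heuristic: derive the challenge by hashing the transcript,
    making the proof non-interactive."""
    data = f"{G}:{y}:{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def issue_credential() -> tuple[int, int]:
    """Stand-in for the issuer step: the holder gets a secret x,
    and y = g^x mod p is the public part a service provider sees."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def prove_possession(x: int, y: int) -> tuple[int, int]:
    """Holder proves knowledge of the secret x without revealing it."""
    r = secrets.randbelow(Q - 1) + 1
    t = pow(G, r, P)        # commitment
    c = _challenge(y, t)    # challenge, derived non-interactively
    s = (r + c * x) % Q     # response
    return t, s

def verify_possession(y: int, proof: tuple[int, int]) -> bool:
    """Service provider checks g^s == t * y^c (mod p). This confirms the
    prover holds the secret behind y while learning nothing else about it."""
    t, s = proof
    c = _challenge(y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

A service provider would call `verify_possession(y, proof)` and accept the user if it returns `True`; a tampered or forged proof fails the check. The key property mirrored here is the one the article describes: verification succeeds without the secret ever being disclosed.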
The ability to filter out anyone other than verified humans on a platform could be useful in many ways. People could reject Tinder matches that don’t come with personhood credentials, for example, or choose not to see anything on social media that wasn’t definitely posted by a person.
The authors want to encourage governments, companies, and standards bodies to consider adopting such a system in the future to prevent AI deception from ballooning out of our control.
“AI is everywhere. There will be many issues, many problems, and many solutions,” says Tobin South, a PhD student at MIT who worked on the project. “Our goal is not to prescribe this to the world, but to open the conversation about why we need this and how it could be done.”
Possible technical options already exist. For example, a network called Idena claims to be the first blockchain proof-of-person system. It works by having humans solve, within a short time frame, puzzles that would be difficult for bots. The controversial Worldcoin program, which collects users’ biometric data, bills itself as the world’s largest privacy-preserving human identity and financial network. It recently partnered with the Malaysian government to provide proof of humanness online by scanning users’ irises, which creates a code. As in the personhood-credentials concept, each code is protected using cryptography.
However, the project has been criticized for using deceptive marketing practices, collecting more personal data than acknowledged, and failing to obtain meaningful consent from users. Regulators in Hong Kong and Spain banned Worldcoin from operating earlier this year, while its operations have been suspended in countries including Brazil, Kenya, and India.
So fresh concepts are still needed. The rapid rise of accessible AI tools has ushered in a dangerous period in which internet users are hyper-suspicious about what is and isn’t true online, says Henry Ajder, an expert on AI and deepfakes who is an advisor to Meta and the UK government. And while ideas for verifying personhood have been around for some time, these credentials feel like one of the most substantive ideas for how to push back against encroaching skepticism, he says.
But the biggest challenge the credentials will face is getting enough platforms, digital services, and governments to adopt them, since they may feel uncomfortable conforming to a standard they don’t control. “For this to work effectively, it would have to be something which is universally adopted,” he says. “In principle the technology is quite compelling, but in practice and the messy world of humans and institutions, I think there would be quite a lot of resistance.”
Martin Tschammer, head of security at the startup Synthesia, which creates AI-generated hyperrealistic deepfakes, says he agrees with the principle driving personhood credentials: the need to verify humans online. However, he is unsure whether it’s the right solution or whether it would be practical to implement. He also expresses skepticism over who would run such a scheme.
“We may end up in a world in which we centralize even more power and concentrate decision-making over our digital lives, giving large internet platforms even more ownership over who can exist online and for what purpose,” he says. “And given the lackluster performance of some governments in adopting digital services, and autocratic tendencies that are on the rise, is it practical or realistic to expect this type of technology to be adopted en masse and in a responsible way by the end of this decade?”
Rather than waiting for collaboration across industries, Synthesia is currently evaluating how to integrate other personhood-proving mechanisms into its products. Tschammer says the company already has several measures in place. For example, it requires businesses to prove that they are legitimate registered companies, and will ban and refuse refunds to customers found to have broken its rules.
One thing is clear: We are in urgent need of ways to differentiate humans from bots, and encouraging discussions between stakeholders in the tech and policy worlds is a step in the right direction, says Emilio Ferrara, a professor of computer science at the University of Southern California, who was not involved in the project.
“We’re not far from a future where, if things remain unchecked, we’re going to be essentially unable to tell apart interactions that we have online with other humans or some kind of bots. Something has to be done,” he says. “We can’t be naïve as previous generations were with technologies.”