Why detecting AI-generated text is so difficult (and what to do about it)

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last week, OpenAI unveiled a tool that can detect text produced by its AI system ChatGPT. But if you’re a teacher who fears the coming deluge of ChatGPT-generated essays, don’t get the party poppers out yet. 

This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any way to detect text it has generated. However, it is still very much a work in progress, and it is woefully unreliable. OpenAI says its AI text detector correctly identifies only 26% of AI-written text as “likely AI-written.” 

While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can make it. We’re extremely unlikely to ever get a tool that can spot AI-generated text with 100% certainty. It’s really hard to detect AI-generated text because the whole point of AI language models is to generate fluent, human-seeming text, and the model is mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.

We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. Each new generation of AI language models is more powerful and generates even more fluent language, which quickly makes the existing detection toolkit outdated. 

OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond. 
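OpenAI hasn’t shared the details of how its detector works beyond that broad description, but the general recipe, training a classifier on labeled examples of human-written and AI-generated text, can be sketched in a few lines. The toy example below uses simple word-count features and scikit-learn rather than a fine-tuned language model, and the training texts and labels are placeholders, not OpenAI’s data or method.

```python
# Minimal sketch of the idea behind a learned AI-text detector: train a
# binary classifier on labeled examples of human-written (0) and
# AI-generated (1) text. This toy version uses TF-IDF features and
# logistic regression purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; a real detector would need millions of examples.
texts = [
    "I walked to the shop and the rain just would not stop.",   # human
    "The rain persisted as I made my way to the local shop.",   # AI
]
labels = [0, 1]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# The classifier outputs a probability that a new passage is AI-written,
# which a product might bucket into labels like "likely AI-written".
print(detector.predict_proba(["A new passage to score."])[0][1])
```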

Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such. 

Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used. 
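Roughly speaking, the Maryland approach nudges the model toward a secretly chosen “green list” of words at each step, so watermarked text ends up containing suspiciously many green words. The sketch below shows only the detection side of that idea, with a toy vocabulary, whitespace tokenization, and made-up constants; it illustrates the statistics involved, not the researchers’ actual code.

```python
# Rough sketch of watermark detection: the previous token seeds a
# pseudo-random split of the vocabulary into "green" and "red" lists,
# and detection checks whether green tokens are over-represented.
import hashlib
import math
import random

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: str, vocab: list) -> set:
    """Deterministically split the vocabulary based on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def watermark_z_score(tokens: list, vocab: list) -> float:
    """How improbably often do tokens land in the green list?"""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std  # a large z-score suggests a watermark

# Usage: unwatermarked text hovers near z = 0; watermarked text scores high.
vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
print(watermark_z_score("the cat sat on a mat".split(), vocab))
```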

The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked. 

One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.

The AI text detector that OpenAI rolled out is only one tool among many, and in the future we will likely have to use a combination of them to identify AI-generated text. Another new tool, called GPTZero, measures how predictable text passages are. AI-generated text tends to be more uniform and repetitive, while people write with more variation. As with diagnoses from doctors, says Abdul-Mageed, when using AI detection tools it’s a good idea to get a second or even a third opinion.
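GPTZero’s exact scoring isn’t public, but the underlying signal, how predictable a passage looks to a language model, is easy to sketch with an open model such as GPT-2 from the Hugging Face transformers library. In this illustration, lower perplexity (more predictable text) would count as weak evidence of machine authorship; any threshold you pick is an assumption, not part of GPTZero.

```python
# Sketch of a perplexity-based signal: score how "surprised" an open
# language model is by a passage. This is not GPTZero's code; it simply
# computes GPT-2 perplexity, where lower values mean more predictable text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()  # average per-token surprise

# Usage: compare scores across passages rather than trusting one number.
print(perplexity("The quick brown fox jumps over the lazy dog."))
print(perplexity("Colorless green ideas sleep furiously in the rain."))
```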

One of the biggest changes ushered in by ChatGPT might be the shift in how we evaluate written text. In the future, maybe students won’t write everything from scratch anymore, and the focus will be on coming up with original thoughts, says Sebastian Raschka, an AI researcher who works at AI startup Lightning.AI. Essays and texts generated by ChatGPT will eventually start resembling each other as the AI system runs out of ideas, because it is constrained by its programming and the data in its training set.

“It will be easier to write correctly, but it won’t be easier to write originally,” Raschka says.

New report: Generative AI in industrial design and engineering

Generative AI—the hottest technology this year—is transforming entire sectors, from journalism and drug design to industrial design and engineering. It’ll be more important than ever for leaders in those industries to stay ahead. We’ve got you covered. A new research report from MIT Technology Review highlights the opportunities—and potential pitfalls—of this new technology for industrial design and engineering. 

The report includes two case studies from leading industrial and engineering companies that are already applying generative AI to their work—and a ton of takeaways and best practices from industry leaders. It is available now for $195.

Deeper Learning

AI models generate copyrighted images and photos of real people

Popular image generation models such as Stable Diffusion can be prompted to produce identifiable photos of real people, potentially threatening their privacy, according to new research. The work also shows that these AI systems can be made to regurgitate exact copies of medical images, as well as copyrighted work by artists. 

Why this matters: The extent to which these AI models memorize and regurgitate images from their databases is at the root of multiple lawsuits between AI companies and artists. This finding could strengthen the artists’ case. Read more from me about this.

Leaky AI models: Sadly, in the push to release new models faster, AI developers too often overlook privacy. And it’s not just image-generating systems. AI language models are also extremely leaky, as I found out when I asked GPT-3, ChatGPT’s predecessor, what it knew about me and MIT Technology Review’s editor in chief. The results were hilarious and creepy.  

Bits and Bytes

When my dad was sick, I started Googling grief. Then I couldn’t escape it.
A beautiful piece by my colleague Tate Ryan-Mosley about grief and death, and the pernicious content recommendation algorithms that follow her around the internet only to offer more content on grief and death. Tate spent months asking experts how we can get more control over rogue algorithms. Their answers aren’t all that satisfying. (MIT Technology Review)

Google has invested $300 million into an AI startup 
The tech giant is the latest to hop on the generative-AI bandwagon. It’s poured money into AI startup Anthropic, which is developing language models similar to ChatGPT. The deal gives Google a 10% stake in the company in exchange for the computing power needed to run large AI models. (The Financial Times)

How ChatGPT kicked off an AI race
This is a nice peek behind the scenes at OpenAI and how it decided to launch ChatGPT as a way to gather feedback for the next-generation AI language model, GPT-4. The chatbot’s success has been an “earthshaking surprise” inside OpenAI. (The New York Times)

If ChatGPT were a cat
Meet CatGPT. Frankly, the only AI chatbot that matters to me.