Ice Lounge Media


How machines that can solve complex math problems might usher in more powerful AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s been another big week in AI. Meta updated its powerful new Llama model, which it’s handing out for free, and OpenAI said it is going to trial an AI-powered online search tool that you can chat with, called SearchGPT. 

But the news item that really stood out to me was one that didn’t get as much attention as it should have. It has the potential to usher in more powerful AI, and more scientific discovery, than was previously possible.

Last Thursday, Google DeepMind announced it had built AI systems that can solve complex math problems. The systems—called AlphaProof and AlphaGeometry 2—worked together to successfully solve four out of six problems from this year’s International Mathematical Olympiad, a prestigious competition for high school students. Their performance was the equivalent of winning a silver medal. It’s the first time any AI system has ever achieved such a high success rate on these kinds of problems. My colleague Rhiannon Williams has the news here.

Math! I can already imagine your eyes glazing over. But bear with me. This announcement is not just about math. In fact, it signals an exciting new development in the kind of AI we can now build. AI search engines that you can chat with may add to the illusion of intelligence, but systems like Google DeepMind’s could improve the actual intelligence of AI. For that reason, building systems that are better at math has been a goal for many AI labs, such as OpenAI.  

That’s because math is a benchmark for reasoning. To complete these exercises, which are aimed at high school students, the AI systems needed to plan ahead and reason their way through abstract problems. The systems were also able to generalize, allowing them to solve a whole range of different problems across various branches of mathematics. 

“What we’ve seen here is that you can combine [reinforcement learning] that was so successful in things like AlphaGo with large language models and produce something which is extremely capable in the space of text,” David Silver, principal research scientist at Google DeepMind and indisputably a pioneer of deep reinforcement learning, said in a press briefing. In this case, that capability was used to construct programs in the computer language Lean that represent mathematical proofs. He says the International Mathematical Olympiad represents a test for what’s possible and paves the way for further breakthroughs. 
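Lean, the language Silver mentions, is a proof assistant: proofs are written as programs, and Lean’s kernel checks them mechanically. As a rough illustration only (this is not DeepMind’s code), a formally verified statement in Lean 4 looks like this:

```lean
-- A simple theorem stated and proved in Lean 4: addition of natural
-- numbers is commutative. The proof appeals to the standard library
-- lemma Nat.add_comm, and Lean's kernel verifies it mechanically.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Because the checker either accepts or rejects a candidate proof, there is no ambiguity about whether a solution is correct, which is exactly what makes formal math so well suited to training AI systems.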

The same recipe could be applied in any situation that offers clear, verified reward signals for reinforcement-learning algorithms and an unambiguous way to measure correctness, as mathematics does, Silver said. Coding is one potential application, for example. 
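To make the recipe concrete, here is a deliberately tiny sketch, not DeepMind’s actual method, of learning from a verifiable reward. A checker stands in for a proof verifier, the "policy" is just a table of sampling weights over candidate answers, and any answer the checker accepts gets reinforced:

```python
import random

def verifier(candidate, target):
    # In mathematics the reward signal is unambiguous: a proof either
    # checks out or it doesn't. Exact match stands in for a proof checker.
    return candidate == target

def train(target=7, episodes=500, lr=0.5, seed=0):
    # Toy reinforcement loop: sample an answer from the current policy,
    # score it with the verifier, and nudge up the weight of any answer
    # the verifier accepts.
    rng = random.Random(seed)
    prefs = [1.0] * 10  # sampling weights over candidate answers 0..9
    for _ in range(episodes):
        candidate = rng.choices(range(10), weights=prefs)[0]
        if verifier(candidate, target):
            prefs[candidate] += lr  # reinforce verified successes
    return prefs

prefs = train()
best = max(range(10), key=lambda a: prefs[a])  # the learned best answer
```

The key property, and the reason Silver singles out domains like math and coding, is that the reward here comes from a checker, not from human judgment, so the learning signal is never noisy or subjective.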

Now for a compulsory reality check: AlphaProof and AlphaGeometry 2 can still only solve hard high-school-level problems. That’s a long way away from the extremely hard problems top human mathematicians can solve. Google DeepMind stressed that its tool did not, at this point, add anything to the body of mathematical knowledge humans have created. But that wasn’t the point. 

“We are aiming to provide a system that can prove anything,” Silver said. Think of an AI system as reliable as a calculator, for example, that can provide proofs for many challenging problems, or verify tests for computer software or scientific experiments. Or perhaps build better AI tutors that can give feedback on exam results, or fact-check news articles. 

But the thing that excites me most is what Katie Collins, a researcher at the University of Cambridge who specializes in math and AI (and was not involved in the project), told Rhiannon. She says these tools create and evaluate new problems, motivate new people to enter the field, and spark more wonder. That’s something we definitely need more of in this world.


Now read the rest of The Algorithm

Deeper Learning

A new tool for copyright holders can show if their work is in AI training data

Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: “copyright traps.” These are pieces of hidden text that let you mark written content in order to later detect whether it has been used in AI models or not. 
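The idea can be sketched in a few lines. In the real method, detection relies on statistical membership-inference tests against the model itself; the hypothetical helpers below only illustrate the injection half, with a plain substring check standing in for that statistical step:

```python
import random
import string

def make_trap(length=12, seed=42):
    # Generate a high-entropy "trap" sequence that is vanishingly
    # unlikely to occur naturally anywhere else on the web.
    rng = random.Random(seed)
    words = ["".join(rng.choices(string.ascii_lowercase, k=6))
             for _ in range(length)]
    return " ".join(words)

def inject_trap(document, trap, copies=3):
    # Hide several copies of the trap inside the published text.
    return document + ("\n" + trap) * copies

def contains_trap(text, trap):
    # Simplified detection: in practice you would probe the trained
    # model's behavior on the trap, not search raw text.
    return trap in text

trap = make_trap()
doc = inject_trap("My original article text.", trap)
print(contains_trap(doc, trap))  # → True
```

Because the trap is unique to one creator’s content, finding evidence of it later is strong evidence that this specific content ended up in a training set.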

Why this matters: Copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The idea is that these traps could help to nudge the balance a little more in the content creators’ favor. Read more from me here.

Bits and Bytes

AI trained on AI garbage spits out AI garbage
New research published in Nature shows that the quality of AI models’ output gradually degrades when they are trained on AI-generated data. As subsequent models produce output that is then used as training data for future models, the effect gets worse. (MIT Technology Review)

OpenAI unveils SearchGPT 
The company says it is testing new AI search features that give you fast and timely answers with clear and relevant sources cited. The idea is for the technology to eventually be incorporated into ChatGPT, and CEO Sam Altman says it’ll be possible to do voice searches. However, like many other AI-powered search services, including Google’s, it’s already making errors, as The Atlantic reports. 
(OpenAI)

AI video generator Runway trained on thousands of YouTube videos without permission
Leaked documents show that the company was secretly training its generative AI models by scraping thousands of videos from popular YouTube creators and brands, as well as pirated films. (404 Media)

Meta’s big bet on open-source AI continues
Meta unveiled Llama 3.1 405B, the first frontier-level open-source AI model, which matches state-of-the-art models such as GPT-4 and Gemini in performance. In an accompanying blog post, Mark Zuckerberg renewed his calls for open-source AI to become the industry standard. This would be good for customization, competition, data protection, and efficiency, he argues. It’s also good for Meta, because it leaves competitors with less of an advantage in the AI space. (Facebook)