This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
World leaders are currently in Dubai for the UN COP28 climate talks. With 2023 set to become the hottest year on record, this year’s meeting is a moment of reckoning for oil and gas companies. There is also renewed focus on, and enthusiasm for, boosting cleantech startups. The stakes could not be higher.
But there’s one thing people aren’t talking enough about, and that’s the carbon footprint of AI. Part of the reason is that big tech companies don’t share the carbon footprint of training and using their massive models, and that we don’t have standardized ways of measuring the emissions AI is responsible for. And while we know training AI models is highly polluting, the emissions attributable to using AI have been a missing piece so far. That is, until now.
I just published a story on new research that calculated the real carbon footprint of using generative AI models. Generating one image takes as much energy as fully charging your smartphone, according to the study from researchers at the AI startup Hugging Face and Carnegie Mellon University. This has big implications for the planet, because tech companies are integrating these powerful models into everything from online search to email, and they get used billions of times a day. If you want to know more, you can read the full story here.
Cutting-edge technology doesn’t have to harm the planet, and research like this is very important in helping us get concrete numbers about emissions. It will also help people understand that the cloud we imagine AI models living on is actually very tangible, says Sasha Luccioni, an AI researcher at Hugging Face who led the work.
Once we have those numbers, we can start thinking about when using powerful models is actually necessary and when smaller, more nimble models might be more appropriate, she says.
Vijay Gadepally, a research scientist at the MIT Lincoln Laboratory who did not participate in the research, has similar thoughts. Knowing the carbon footprint of each use of AI might make people more thoughtful about the way they use these models, he says.
Luccioni’s research also highlights how the emissions related to using AI will depend on where it’s being used, says Jesse Dodge, a research scientist at the Allen Institute for AI, who was not part of the study. The carbon footprint of AI in places where the power grid is relatively clean, such as France, will be much lower than it is in places with a grid that is heavily reliant on fossil fuels, such as some parts of the US. While the electricity consumed by running AI models is fixed, we might be able to reduce the overall carbon footprint of these models by running them in areas where the power grid consists of more renewable sources, he says.
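Dodge’s point boils down to simple arithmetic: the energy a model draws per query is roughly fixed, but the emissions are that energy multiplied by the local grid’s carbon intensity. A minimal sketch of that relationship is below; the per-image energy figure and the grid intensity values are illustrative assumptions, not numbers from the study.

```python
def inference_emissions(energy_kwh: float, grid_gco2_per_kwh: float) -> float:
    """Grams of CO2 emitted for a given energy use on a given grid.

    Emissions = electricity consumed (kWh) x grid carbon intensity (gCO2/kWh).
    """
    return energy_kwh * grid_gco2_per_kwh


# Hypothetical energy cost of generating one image (assumed, for illustration).
energy_per_image_kwh = 0.012

# Assumed ballpark carbon intensities, not measured values.
grids = {
    "low-carbon grid (e.g. largely nuclear and renewables)": 60,
    "fossil-heavy grid": 700,
}

for name, intensity in grids.items():
    grams = inference_emissions(energy_per_image_kwh, intensity)
    print(f"{name}: {grams:.2f} g CO2 per image")
```

Under these assumed numbers, the identical query emits more than ten times as much CO2 on the fossil-heavy grid — which is why routing workloads to cleaner grids can shrink the footprint even when the electricity use itself doesn’t change.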
While climate change is extremely anxiety-inducing, it’s vital that we better understand the tech sector’s effect on our planet. Studies like this one might help us come up with creative solutions that allow us to reap the benefits of AI while minimizing the harm.
After all, it’s hard to fix something you can’t measure.
Deeper Learning
Google DeepMind’s new AI tool helped create more than 700 new materials
From EV batteries to solar cells to microchips, new materials can supercharge technological breakthroughs. But discovering them usually takes months or even years of trial-and-error research. A new tool from Google DeepMind uses deep learning to dramatically speed up the process of discovering new materials.
What’s the big deal: Called graph networks for materials exploration (GNoME), the technology has already been used to predict structures for 2.2 million new materials, of which more than 700 have gone on to be created in the lab and are now being tested. GNoME can be described as AlphaFold for materials discovery. Thanks to GNoME, the number of known stable materials has grown almost tenfold, to 421,000. Read more from June Kim here.
Bits and Bytes
A high school’s deepfake porn scandal is pushing US lawmakers into action
Legislators are responding quickly after teens used AI to create nonconsensual sexually explicit images. (MIT Technology Review)
He wanted privacy. His college gave him none.
This great investigation shows just how much surveillance tech college students are being subjected to, including homework trackers, test-taking software, and even license plate readers. (The Markup)
ChatGPT is leaking its secrets
Two new stories show how vulnerable AI chatbots are to leaking data, putting personal and proprietary information at risk. The first story, by Wired, shows how easily OpenAI’s custom ChatGPT bots spill the initial instructions they were given when they were created. Another one, by 404 Media, shows how researchers at Google DeepMind were able to get a chatbot to reveal its data by asking it to repeat specific words over and over.
What it’s like being a prompt engineer earning $200K
A fun story on the people paid six figures to get AI chatbots to do what they want. (The Wall Street Journal)