Ice Lounge Media

People working for, or with, Elon Musk are reportedly taking over the inner workings of multiple government agencies, including the Office of Personnel Management, the Treasury Department, and the General Services Administration. The Washington Post reported Friday that the highest-ranking career official at Treasury is leaving the department after “a clash” with people working for […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read more

Tech companies developing self-driving vehicle technology have tapped the brakes on testing on California’s public roads, according to new data from the state’s Department of Motor Vehicles. The agency reported Friday a total of 4.5 million autonomous vehicle test miles were logged in 2024, a 50% drop from the previous year. That figure covers two […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read more

OpenAI used the subreddit, r/ChangeMyView, to create a test for measuring the persuasive abilities of its AI reasoning models. The company revealed this in a system card — a document outlining how an AI system works — that was released along with its new “reasoning” model, o3-mini, on Friday. Millions of Reddit users are members […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read more

To cap off a day of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday. OpenAI finds itself in a bit of a precarious position. It’s battling the perception that it’s ceding ground in the AI race to Chinese companies like DeepSeek, which […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read more

In the week since a Chinese AI model called DeepSeek became a household name, a dizzying number of narratives have gained steam, with varying degrees of accuracy: that the model is collecting your personal data (maybe); that it will upend AI as we know it (too soon to tell—but do read my colleague Will’s story on that!); and perhaps most notably, that DeepSeek’s new, more efficient approach means AI might not need to guzzle the massive amounts of energy that it currently does.

The latter notion is misleading, and new numbers shared with MIT Technology Review help show why. These early figures—based on the performance of one of DeepSeek’s smaller models on a small number of prompts—suggest it could be more energy intensive when generating responses than the equivalent-size model from Meta. The issue might be that the energy it saves in training is offset by its more intensive techniques for answering questions, and by the long answers they produce. 

Add the fact that other tech firms, inspired by DeepSeek’s approach, may now start building their own similar low-cost reasoning models, and the outlook for energy consumption is already looking a lot less rosy.

The life cycle of any AI model has two phases: training and inference. Training is the often months-long process in which the model learns from data. The model is then ready for inference, which happens each time anyone in the world asks it something. Both usually take place in data centers, where they require lots of energy to run chips and cool servers. 

On the training side for its R1 model, DeepSeek’s team improved what’s called a “mixture of experts” technique, in which only a portion of a model’s billions of parameters—the “knobs” a model uses to form better answers—are turned on at a given time during training. More notably, they improved reinforcement learning, where a model’s outputs are scored and then used to make it better. This is often done by human annotators, but the DeepSeek team got good at automating it.
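To get a feel for the mixture-of-experts idea, here is a minimal sketch; the sizes, gating, and routing below are illustrative, not DeepSeek’s implementation. The point is that only the experts the router selects for a token actually run, so most of the model’s parameters sit idle on any given pass.

```python
# Minimal sketch of mixture-of-experts routing (illustrative, not DeepSeek's code).
# Only the top-k experts chosen by the gate run for each token, so most
# parameters stay idle on any given forward pass.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts)  # router: scores each expert per token
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, dim)
        scores = self.gate(x)                    # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):              # run only the selected experts
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * self.experts[e](x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([16, 64])
```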

The introduction of a way to make training more efficient might suggest that AI companies will use less energy to bring their AI models to a certain standard. That’s not really how it works, though. 

“⁠Because the value of having a more intelligent system is so high,” wrote Anthropic cofounder Dario Amodei on his blog, it “causes companies to spend more, not less, on training models.” If companies get more for their money, they will find it worthwhile to spend more, and therefore use more energy. “The gains in cost efficiency end up entirely devoted to training smarter models, limited only by the company’s financial resources,” he wrote. It’s an example of what’s known as the Jevons paradox.

But that’s been true on the training side as long as the AI race has been going. The energy required for inference is where things get more interesting. 

DeepSeek is designed as a reasoning model, which means it’s meant to perform well on things like logic, pattern-finding, math, and other tasks that typical generative AI models struggle with. Reasoning models do this using something called “chain of thought.” It allows the AI model to break its task into parts and work through them in a logical order before coming to its conclusion. 
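Roughly, the difference looks like this. The prompts below are only illustrative; a reasoning model such as R1 produces the step-by-step trace on its own rather than needing it spelled out.

```python
# Illustrative only: the difference between asking for an answer directly and
# asking for step-by-step reasoning. Reasoning models like R1 generate the long
# step-by-step trace themselves before giving a final answer.
question = "Is it okay to lie to protect someone's feelings?"

direct_prompt = f"{question}\nAnswer in one sentence."

chain_of_thought_prompt = (
    f"{question}\n"
    "Think through the problem step by step: consider the competing ethical "
    "frameworks, weigh the trade-offs, then state your conclusion.\n"
    "Reasoning:"
)

# The second style typically yields a much longer response, which is exactly
# why chain-of-thought answers cost more energy per query.
print(direct_prompt)
print(chain_of_thought_prompt)
```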

You can see this with DeepSeek. Ask whether it’s okay to lie to protect someone’s feelings, and the model first tackles the question with utilitarianism, weighing the immediate good against the potential future harm. It then considers Kantian ethics, which propose that you should act according to maxims that could be universal laws. It considers these and other nuances before sharing its conclusion. (It finds that lying is “generally acceptable in situations where kindness and prevention of harm are paramount, yet nuanced with no universal solution,” if you’re curious.)

Chain-of-thought models tend to perform better on certain benchmarks such as MMLU, which tests both knowledge and problem-solving in 57 subjects. But, as is becoming clear with DeepSeek, they also require significantly more energy to come to their answers. We have some early clues about just how much more.

Scott Chamberlin spent years at Microsoft, and later Intel, building tools to help reveal the environmental costs of certain digital activities. Chamberlin did some initial tests to see how much energy a GPU uses as DeepSeek comes to its answer. The experiment comes with a bunch of caveats: He tested only a medium-size version of DeepSeek’s R1, using only a small number of prompts. It’s also difficult to make comparisons with other reasoning models.

DeepSeek is “really the first reasoning model that is fairly popular that any of us have access to,” he says. OpenAI’s o1 model is its closest competitor, but the company doesn’t make it open for testing. Instead, he tested it against a model from Meta with the same number of parameters: 70 billion.

The prompt asking whether it’s okay to lie generated a 1,000-word response from the DeepSeek model, which took 17,800 joules to generate—about what it takes to stream a 10-minute YouTube video. This was about 41% more energy than Meta’s model used to answer the prompt. Overall, when tested on 40 prompts, DeepSeek was found to have a similar energy efficiency to the Meta model, but DeepSeek tended to generate much longer responses and therefore was found to use 87% more energy.

How does this compare with models that use regular old-fashioned generative AI as opposed to chain-of-thought reasoning? Tests from a team at the University of Michigan in October found that the 70-billion-parameter version of Meta’s Llama 3.1 averaged just 512 joules per response.
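Putting those figures side by side, the back-of-envelope arithmetic looks like this. The derived numbers inherit all the caveats above: small prompt set, one medium-size model, different models built for different purposes.

```python
# Back-of-envelope arithmetic using the figures reported above; all the caveats
# about small sample size and differing models still apply.
deepseek_joules_per_answer = 17_800       # ~1,000-word answer to the lying prompt
meta_joules_same_prompt = deepseek_joules_per_answer / 1.41   # "41% more" => Meta ~12,600 J
llama31_70b_joules_avg = 512              # University of Michigan average, non-reasoning model

print(f"Meta model, same prompt:  ~{meta_joules_same_prompt:,.0f} J")
print(f"Reasoning vs. plain chat: ~{deepseek_joules_per_answer / llama31_70b_joules_avg:.0f}x "
      "more energy for this single long answer")
# The 17,800 J answer is roughly what it takes to stream a 10-minute YouTube
# video, per the comparison in the text.
```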

Neither DeepSeek nor Meta responded to requests for comment.

Again: uncertainties abound. These are different models, for different purposes, and a scientifically sound study of how much energy DeepSeek uses relative to competitors has not been done. But it’s clear, based on the architecture of the models alone, that chain-of-thought models use lots more energy as they arrive at sounder answers. 

Sasha Luccioni, an AI researcher and climate lead at Hugging Face, worries that the excitement around DeepSeek could lead to a rush to insert this approach into everything, even where it’s not needed. 

“If we started adopting this paradigm widely, inference energy usage would skyrocket,” she says. “If all of the models that are released are more compute intensive and become chain-of-thought, then it completely voids any efficiency gains.”

AI has been here before. Before ChatGPT launched in 2022, the name of the game in AI was extractive—basically finding information in lots of text, or categorizing images. But in 2022, the focus switched from extractive AI to generative AI, which is based on making better and better predictions. That requires more energy. 

“That’s the first paradigm shift,” Luccioni says. According to her research, that shift has resulted in orders of magnitude more energy being used to accomplish similar tasks. If the fervor around DeepSeek continues, she says, companies might be pressured to put its chain-of-thought-style models into everything, the way generative AI has been added to everything from Google search to messaging apps. 

We do seem to be heading in a direction of more chain-of-thought reasoning: OpenAI announced on January 31 that it would expand access to its own reasoning model, o3. But we won’t know more about the energy costs until DeepSeek and other models like it become better studied.

“It will depend on whether or not the trade-off is economically worthwhile for the business in question,” says Nathan Benaich, founder and general partner at Air Street Capital. “The energy costs would have to be off the charts for them to play a meaningful role in decision-making.”

Read more

On Thursday, Microsoft announced that it’s rolling OpenAI’s reasoning model o1 out to its Copilot users, and now OpenAI is releasing a new reasoning model, o3-mini, to people who use the free version of ChatGPT. This will mark the first time that the vast majority of people will have access to one of OpenAI’s reasoning models, which were formerly restricted to its paid Pro and Plus bundles.

Reasoning models use a “chain of thought” technique to generate responses, essentially working through a problem presented to the model step by step. Using this method, the model can find mistakes in its process and correct them before giving an answer. This typically results in more thorough and accurate responses, but it also causes the models to pause before answering, sometimes leading to lengthy wait times. OpenAI claims that o3-mini responds 24% faster than o1-mini.

These types of models are most effective at solving complex problems, so if you have any PhD-level math problems you’re cracking away at, you can try them out. Alternatively, if you’ve had issues with getting previous models to respond properly to your most advanced prompts, you may want to try out this new reasoning model on them. To try out o3-mini, simply select “Reason” when you start a new prompt on ChatGPT.

Although reasoning models possess new capabilities, they come at a cost. OpenAI’s o1-mini is 20 times more expensive to run than its equivalent non-reasoning model, GPT-4o mini. The company says its new model, o3-mini, costs 63% less than o1-mini per input token. However, at $1.10 per million input tokens, it is still about seven times more expensive to run than GPT-4o mini.
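As a quick sanity check, here is the arithmetic implied by the prices quoted above, per million input tokens. The derived figures are approximate, back-calculated from the stated ratios rather than taken from a price list.

```python
# Rough arithmetic implied by the prices quoted above (per million input tokens).
o3_mini = 1.10                       # stated price
o1_mini = o3_mini / (1 - 0.63)       # "63% less than o1-mini" => o1-mini ~ $2.97
gpt_4o_mini = o3_mini / 7            # "about seven times more expensive" => ~ $0.16

print(f"o1-mini (implied):      ~${o1_mini:.2f}")
print(f"GPT-4o mini (implied):  ~${gpt_4o_mini:.2f}")
print(f"o1-mini vs GPT-4o mini: ~{o1_mini / gpt_4o_mini:.0f}x")  # in line with the "20 times" figure
```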

This new model is coming right after the DeepSeek release that shook the AI world less than two weeks ago. DeepSeek’s new model performs just as well as top OpenAI models, but the Chinese company claims it cost roughly $6 million to train, as opposed to the estimated cost of over $100 million for training OpenAI’s GPT-4. (It’s worth noting that a lot of people are interrogating this claim.) 

Additionally, DeepSeek’s reasoning model costs $0.55 per million input tokens, half the price of o3-mini, so OpenAI still has a way to go to bring down its costs. It’s estimated that reasoning models also have much higher energy costs than other types, given the larger number of computations they require to produce an answer.

This new wave of reasoning models presents new safety challenges as well. OpenAI used a technique called deliberative alignment to train its o-series models, basically having them reference OpenAI’s internal policies at each step of their reasoning to make sure they weren’t ignoring any rules.

But the company has found that o3-mini, like the o1 model, is significantly better than non-reasoning models at jailbreaking and “challenging safety evaluations”—essentially, it’s much harder to control a reasoning model given its advanced capabilities. o3-mini is the first model to score as “medium risk” on model autonomy, a rating given because it’s better than previous models at specific coding tasks—indicating “greater potential for self-improvement and AI research acceleration,” according to OpenAI. That said, the model is still bad at real-world research. If it were better at that, it would be rated as high risk, and OpenAI would restrict the model’s release.

Read more

Join us on Monday, February 3 as our editors discuss what DeepSeek’s breakout success means for AI and the broader tech industry. Register for this special subscriber-only session today.

When the Chinese firm DeepSeek dropped a large language model called R1 last week, it sent shock waves through the US tech industry. Not only did R1 match the best of the homegrown competition, it was built for a fraction of the cost—and given away for free. 

The US stock market lost $1 trillion, President Trump called it a wake-up call, and the hype was dialed up yet again. “DeepSeek R1 is one of the most amazing and impressive breakthroughs I’ve ever seen—and as open source, a profound gift to the world,” Silicon Valley’s kingpin investor Marc Andreessen posted on X.

But DeepSeek’s innovations are not the only takeaway here. By publishing details about how R1 and a previous model called V3 were built and releasing the models for free, DeepSeek has pulled back the curtain to reveal that reasoning models are a lot easier to build than people thought. The company has closed the gap with the world’s very top labs.

The news kicked competitors everywhere into gear. This week, the Chinese tech giant Alibaba announced a new version of its large language model Qwen and the Allen Institute for AI (AI2), a top US nonprofit lab, announced an update to its large language model Tulu. Both claim that their latest models beat DeepSeek’s equivalent.

Sam Altman, cofounder and CEO of OpenAI, called R1 impressive—for the price—but hit back with a bullish promise: “We will obviously deliver much better models.” OpenAI then pushed out ChatGPT Gov, a version of its chatbot tailored to the security needs of US government agencies, in an apparent nod to concerns that DeepSeek’s app was sending data to China. There’s more to come.

DeepSeek has suddenly become the company to beat. What exactly did it do to rattle the tech world so fully? Is the hype justified? And what can we learn from the buzz about what’s coming next? Here’s what you need to know.  

Training steps

Let’s start by unpacking how large language models are trained. There are two main stages, known as pretraining and post-training. Pretraining is the stage most people talk about. In this process, billions of documents—huge numbers of websites, books, code repositories, and more—are fed into a neural network over and over again until it learns to generate text that looks like its source material, one word at a time. What you end up with is known as a base model.
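That “one word at a time” objective can be sketched in a few lines. The toy model below stands in for a real transformer trained on trillions of tokens; only the shape of the training step is the point.

```python
# Toy sketch of next-token pretraining: predict each token from the ones before it.
# Real pretraining does this with a transformer over trillions of tokens.
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
embed = nn.Embedding(vocab_size, dim)
rnn = nn.LSTM(dim, dim, batch_first=True)       # stand-in for the transformer trunk
head = nn.Linear(dim, vocab_size)
opt = torch.optim.Adam([*embed.parameters(), *rnn.parameters(), *head.parameters()], lr=1e-3)

tokens = torch.randint(0, vocab_size, (8, 33))   # a batch of token sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets are the inputs shifted by one

hidden, _ = rnn(embed(inputs))                   # (batch, seq, dim)
logits = head(hidden)                            # (batch, seq, vocab)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
opt.step()
print(f"next-token loss: {loss.item():.2f}")
```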

Pretraining is where most of the work happens, and it can cost huge amounts of money. But as Andrej Karpathy, a cofounder of OpenAI and former head of AI at Tesla, noted in a talk at Microsoft Build last year: “Base models are not assistants. They just want to complete internet documents.”

Turning a large language model into a useful tool takes a number of extra steps. This is the post-training stage, where the model learns to do specific tasks like answer questions (or answer questions step by step, as with OpenAI’s o3 and DeepSeek’s R1). The way this has been done for the last few years is to take a base model and train it to mimic examples of question-answer pairs provided by armies of human testers. This step is known as supervised fine-tuning. 

OpenAI then pioneered yet another step, in which sample answers from the model are scored—again by human testers—and those scores used to train the model to produce future answers more like those that score well and less like those that don’t. This technique, known as reinforcement learning with human feedback (RLHF), is what makes chatbots like ChatGPT so slick. RLHF is now used across the industry.
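Schematically, those two post-training steps look something like this. The data and the scoring rule are stand-ins: a real reward model is itself a trained neural network, not a hand-written function.

```python
# Schematic only: the two classic post-training steps described above.
# Step 1, supervised fine-tuning: train on human-written question-answer pairs.
sft_examples = [
    {"prompt": "Explain photosynthesis simply.",
     "response": "Plants use sunlight to turn water and CO2 into sugar and oxygen."},
    # ... thousands more, written or vetted by human annotators
]

# Step 2, RLHF: humans rank sample answers; a reward model learns those rankings,
# and its scores steer the model toward answers people prefer.
def reward_model(prompt: str, answer: str) -> float:
    """Stand-in for a learned reward model trained on human preference rankings."""
    return float(len(answer) > 20)  # placeholder scoring rule, illustrative only

candidates = ["Because chlorophyll.", "Plants capture light energy and store it as sugar."]
scores = [reward_model("Explain photosynthesis simply.", c) for c in candidates]
print("Preferred answer:", candidates[scores.index(max(scores))])
```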

But those post-training steps take time. What DeepSeek has shown is that you can get the same results without using people at all—at least most of the time. DeepSeek replaces supervised fine-tuning and RLHF with a reinforcement-learning step that is fully automated. Instead of using human feedback to steer its models, the firm uses feedback scores produced by a computer.
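Here is what a computer-produced feedback score can look like in practice, sketched for the easy case of a math question with a checkable answer. The checking rule is deliberately simplified; the point is that no human grader is involved.

```python
# Sketch of fully automated reward: for questions with checkable answers (math,
# code), a program scores each model answer with no human grader involved.
import re

def automated_reward(model_answer: str, ground_truth: str) -> float:
    """Reward 1.0 if the final number in the answer matches the known solution."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_answer)
    return 1.0 if numbers and numbers[-1] == ground_truth else 0.0

samples = [
    "Let's see: 17 * 24 = 408. The answer is 408.",
    "17 * 24 is 398.",
]
rewards = [automated_reward(s, "408") for s in samples]
print(rewards)  # [1.0, 0.0] -- these scores drive the reinforcement-learning update
```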

“Skipping or cutting down on human feedback—that’s a big thing,” says Itamar Friedman, a former research director at Alibaba and now cofounder and CEO of Qodo, an AI coding startup based in Israel. “You’re almost completely training models without humans needing to do the labor.”

Cheap labor

The downside of this approach is that computers are good at scoring answers to questions about math and code but not very good at scoring answers to open-ended or more subjective questions. That’s why R1 performs especially well on math and code tests. To train its models to answer a wider range of non-math questions or perform creative tasks, DeepSeek still has to ask people to provide the feedback. 

But even that is cheaper in China. “Relative to Western markets, the cost to create high-quality data is lower in China and there is a larger talent pool with university qualifications in math, programming, or engineering fields,” says Si Chen, a vice president at the Australian AI firm Appen and a former head of strategy at both Amazon Web Services China and the Chinese tech giant Tencent. 

DeepSeek used this approach to build a base model, called V3, that rivals OpenAI’s flagship model GPT-4o. The firm released V3 a month ago. Last week’s R1, the new model that matches OpenAI’s o1, was built on top of V3. 

To build R1, DeepSeek took V3 and ran its reinforcement-learning loop over and over again. In 2016 Google DeepMind showed that this kind of automated trial-and-error approach, with no human input, could take a board-game-playing model that made random moves and train it to beat grand masters. DeepSeek does something similar with large language models: Potential answers are treated as possible moves in a game. 

To start with, the model did not produce answers that worked through a question step by step, as DeepSeek wanted. But by scoring the model’s sample answers automatically, the training process nudged it bit by bit toward the desired behavior. 

Eventually, DeepSeek produced a model that performed well on a number of benchmarks. But this model, called R1-Zero, gave answers that were hard to read and were written in a mix of multiple languages. To give it one last tweak, DeepSeek seeded the reinforcement-learning process with a small data set of example responses provided by people. Training R1-Zero on those produced the model that DeepSeek named R1. 

There’s more. To make its use of reinforcement learning as efficient as possible, DeepSeek has also developed a new algorithm called Group Relative Policy Optimization (GRPO). It first used GRPO a year ago, to build a model called DeepSeekMath. 

We’ll skip the details—you just need to know that reinforcement learning involves calculating a score to determine whether a potential move is good or bad. Many existing reinforcement-learning techniques require a whole separate model to make this calculation. In the case of large language models, that means a second model that could be as expensive to build and run as the first. Instead of using a second model to predict a score, GRPO just makes an educated guess. It’s cheap, but still accurate enough to work.  
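A rough sketch of that educated guess: score a group of sampled answers to the same prompt and treat each answer’s reward relative to the group as its advantage, with no separate value model. The real GRPO objective also adds ratio clipping and a KL penalty, which are omitted here.

```python
# Simplified view of GRPO's baseline trick: sample a group of answers to the same
# prompt, score them, and use each score relative to the group as the advantage.
import statistics

group_rewards = [1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]   # scores for 8 sampled answers

mean = statistics.mean(group_rewards)
std = statistics.pstdev(group_rewards) or 1.0               # avoid dividing by zero

advantages = [(r - mean) / std for r in group_rewards]
print([round(a, 2) for a in advantages])
# Positive advantage -> push the model toward that answer; negative -> away from it.
# No second "critic" model is needed to estimate how good each answer should have been.
```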

A common approach

DeepSeek’s use of reinforcement learning is the main innovation that the company describes in its R1 paper. But DeepSeek is not the only firm experimenting with this technique. Two weeks before R1 dropped, a team at Microsoft Asia announced a model called rStar-Math, which was trained in a similar way. “It has similarly huge leaps in performance,” says Matt Zeiler, founder and CEO of the AI firm Clarifai.

AI2’s Tulu was also built using efficient reinforcement-learning techniques (but on top of, not instead of, human-led steps like supervised fine-tuning and RLHF). And the US firm Hugging Face is racing to replicate R1 with OpenR1, a clone of DeepSeek’s model that Hugging Face hopes will expose even more of the ingredients in R1’s special sauce.

What’s more, it’s an open secret that top firms like OpenAI, Google DeepMind, and Anthropic may already be using their own versions of DeepSeek’s approach to train their new generation of models. “I’m sure they’re doing almost the exact same thing, but they’ll have their own flavor of it,” says Zeiler. 

But DeepSeek has more than one trick up its sleeve. It trained its base model V3 to do something called multi-token prediction, where the model learns to predict a string of words at once instead of one at a time. This training is cheaper and turns out to boost accuracy as well. “If you think about how you speak, when you’re halfway through a sentence, you know what the rest of the sentence is going to be,” says Zeiler. “These models should be capable of that too.”  
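The multi-token-prediction idea can be sketched roughly as follows. The number of heads and the wiring below are illustrative, not DeepSeek’s exact architecture: extra output heads are trained to predict tokens several positions ahead, not just the immediate next one.

```python
# Illustrative sketch of multi-token prediction: extra heads predict tokens
# several positions ahead, alongside the usual next-token objective.
import torch
import torch.nn as nn

vocab, dim, predict_ahead = 1000, 64, 3
backbone = nn.Embedding(vocab, dim)                 # stand-in for the transformer trunk
heads = nn.ModuleList([nn.Linear(dim, vocab) for _ in range(predict_ahead)])

tokens = torch.randint(0, vocab, (4, 32))
hidden = backbone(tokens)                           # (batch, seq, dim)

loss = 0.0
for k, head in enumerate(heads, start=1):           # head k predicts the token k steps ahead
    logits = head(hidden[:, :-k])                   # positions that still have a target k ahead
    targets = tokens[:, k:]
    loss = loss + nn.functional.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1))
print(f"combined multi-token loss: {loss.item():.2f}")
```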

It has also found cheaper ways to create large data sets. To train last year’s model, DeepSeekMath, it took a free data set called Common Crawl—a huge number of documents scraped from the internet—and used an automated process to extract just the documents that included math problems. This was far cheaper than building a new data set of math problems by hand. It was also more effective: Common Crawl includes a lot more math than any other specialist math data set that’s available. 
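A toy version of that kind of automated filter is sketched below. DeepSeek’s actual pipeline was more sophisticated than this keyword heuristic, which is only meant to show the shape of the idea: scan crawled documents and keep the ones that look like they contain math.

```python
# Toy sketch of automatically filtering a web crawl for math content. DeepSeek's
# real pipeline was more sophisticated; this keyword-and-symbol heuristic only
# shows the shape of the idea.
import re

MATH_SIGNALS = re.compile(
    r"(\\frac|\\sum|\\int|\btheorem\b|\bproof\b|=|\d+\s*[\+\-\*/]\s*\d+)",
    re.IGNORECASE,
)

def looks_like_math(document: str, min_hits: int = 3) -> bool:
    return len(MATH_SIGNALS.findall(document)) >= min_hits

crawl = [
    "Prove that the sum 1 + 2 + ... + n = n(n+1)/2. Proof: by induction ...",
    "Top ten travel destinations for 2024, ranked by our editors.",
]
math_docs = [doc for doc in crawl if looks_like_math(doc)]
print(len(math_docs), "of", len(crawl), "documents kept")
```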

And on the hardware side, DeepSeek has found new ways to juice old chips, allowing it to train top-tier models without coughing up for the latest hardware on the market. Half their innovation comes from straight engineering, says Zeiler: “They definitely have some really, really good GPU engineers on that team.”

Nvidia provides software called CUDA that engineers use to tweak the settings of their chips. But DeepSeek bypassed this code using assembler, a programming language that talks to the hardware itself, to go far beyond what Nvidia offers out of the box. “That’s as hardcore as it gets in optimizing these things,” says Zeiler. “You can do it, but basically it’s so difficult that nobody does.”

DeepSeek’s string of innovations on multiple models is impressive. But it also shows that the firm’s claim to have spent less than $6 million to train V3 is not the whole story. R1 and V3 were built on a stack of existing tech. “Maybe the very last step—the last click of the button—cost them $6 million, but the research that led up to that probably cost 10 times as much, if not more,” says Friedman. And in a blog post that cut through a lot of the hype, Anthropic cofounder and CEO Dario Amodei pointed out that DeepSeek probably has around $1 billion worth of chips, an estimate based on reports that the firm in fact used 50,000 Nvidia H100 GPUs.

A new paradigm

But why now? There are hundreds of startups around the world trying to build the next big thing. Why have we seen a string of reasoning models like OpenAI’s o1 and o3, Google DeepMind’s Gemini 2.0 Flash Thinking, and now R1 appear within weeks of each other? 

The answer is that the base models—GPT-4o, Gemini 2.0, V3—are all now good enough to have reasoning-like behavior coaxed out of them. “What R1 shows is that with a strong enough base model, reinforcement learning is sufficient to elicit reasoning from a language model without any human supervision,” says Lewis Tunstall, a scientist at Hugging Face.

In other words, top US firms may have figured out how to do it but were keeping quiet. “It seems that there’s a clever way of taking your base model, your pretrained model, and turning it into a much more capable reasoning model,” says Zeiler. “And up to this point, the procedure that was required for converting a pretrained model into a reasoning model wasn’t well known. It wasn’t public.”

What’s different about R1 is that DeepSeek published how they did it. “And it turns out that it’s not that expensive a process,” says Zeiler. “The hard part is getting that pretrained model in the first place.” As Karpathy revealed at Microsoft Build last year, pretraining a model represents 99% of the work and most of the cost. 

If building reasoning models is not as hard as people thought, we can expect a proliferation of free models that are far more capable than we’ve yet seen. With the know-how out in the open, Friedman thinks, there will be more collaboration between small companies, blunting the edge that the biggest companies have enjoyed. “I think this could be a monumental moment,” he says. 

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How measuring vaccine hesitancy could help health professionals tackle it

This week, Robert F. Kennedy Jr., President Donald Trump’s pick to lead the US’s health agencies, has been facing questions from senators as part of his confirmation hearing for the role. So far, it’s been a dramatic watch, with plenty of fiery exchanges, screams from audience members, and damaging revelations.

There’s also been a lot of discussion about vaccines. Kennedy has long been a vocal critic of vaccines. He has spread misinformation about the effects of vaccines. He’s petitioned the government to revoke the approval of vaccines. He’s sued pharmaceutical companies that make vaccines.

Kennedy has his supporters. But not everyone who opts not to vaccinate shares his worldview. There are lots of reasons why people don’t vaccinate themselves or their children. Understanding those reasons will help us tackle an issue considered to be a huge global health problem today. And plenty of researchers are working on tools to do just that. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

What DeepSeek’s breakout success means for AI

The tech world is abuzz over a new open-source reasoning AI model developed by DeepSeek, a Chinese startup. The company claims that this new model, called DeepSeek R1, matches or even surpasses OpenAI’s ChatGPT o1 in performance but operates at a fraction of the cost.

Its success is even more remarkable given the constraints that Chinese AI companies face due to US export controls on cutting-edge chips. DeepSeek’s approach represents a radical change in how AI gets built, and could shift the tech world’s center of gravity.

Join news editor Charlotte Jee, senior AI editor Will Douglas Heaven, and China reporter Caiwei Chen for an exclusive subscriber-only Roundtable conversation on Monday 3 February at 12pm ET discussing what DeepSeek’s breakout success means for AI and the broader tech industry. Register here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Federal workers are being forced to defend their work to Elon Musk’s acolytes
Government tech staff are being pulled into sudden meetings with students. (Wired $)
+ Archivists are rushing to save thousands of datasets being yanked offline. (404 Media)
+ Civil servants aren’t buying Musk’s promises. (Slate $)

2 The US Copyright Office says AI-assisted art can be copyrighted 
But works wholly created by AI can’t be. (AP News)
+ The AI lab waging a guerrilla war over exploitative AI. (MIT Technology Review)

3 OpenAI is partnering with US National Laboratories
Its models will be used for scientific research and nuclear weapons security. (NBC News)
+ It’s the latest move from the firm to curry favor with the US government. (Engadget)
+ OpenAI has upped its lobbying efforts nearly sevenfold. (MIT Technology Review)

4 DeepSeek’s success is inspiring founders in Africa
The startup has proved that frugality can go hand in hand with innovation. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)

5 China is building a massive wartime command center
The complex appears to be part of preparation for the possibility of nuclear war. (FT $)
+ Pentagon workers used DeepSeek’s chatbot for days before it was blocked. (Bloomberg $)
+ We saw a demo of the new AI system powering Anduril’s vision for war. (MIT Technology Review)

6 There’s a chance this colossal asteroid will hit Earth in 2032
Experts aren’t too worried—yet. (The Guardian)
+ How worried should we be about the end of the world? (New Yorker $)
+ Earth is probably safe from a killer asteroid for 1,000 years. (MIT Technology Review)

7 Things are looking up for Europe’s leading battery maker
Truckmaker Scania is now supporting the troubled Northvolt’s day-to-day operations. (Reuters)
+ Three takeaways about the current state of batteries. (MIT Technology Review)

8 This group of Luddite teens is still resisting technology
But three years after starting their club, the lure of dating apps is strong. (NYT $)

9 Reddit’s bastion of humanity is under threat
AI features are creeping into the forum, much to users’ chagrin. (The Atlantic $)

10 Bid a fond farewell to MiniDiscs and blank Blu-Rays
Sony is finally pulling the plug on some of its recordable media formats. (IEEE Spectrum)

Quote of the day

“We try to be really open and then everything I say leaks. It sucks.”

—Mark Zuckerberg warns that leakers will be fired in a memo that was promptly leaked, the Verge reports.

The big story

This artist is dominating AI-generated art. And he’s not happy about it.

September 2022

Greg Rutkowski is a Polish digital artist who uses classical styles to create dreamy landscapes. His distinctive style has been used in some of the world’s most popular fantasy games, including Dungeons and Dragons and Magic: The Gathering.

Now he’s become a hit in the new world of text-to-image AI generation. His name is one of the most commonly used prompts in the open-source AI art generator Stable Diffusion.

But this and other open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. And artists like Rutkowski have had enough. Read the full story.

—Melissa Heikkilä

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ It’s an oldie but a goodie: ice dancing gold medalists Tessa Virtue and Scott Moir’s routine to Moulin Rouge is simply spectacular.
+ This week marks 56 years since the Beatles performed their last ever gig on the roof of their Apple headquarters.
+ In other Beatles news, Ringo Starr has never eaten a pizza.
+ The Video Game History Foundation has opened up its incredible archive (thanks Dani!).

Read more

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week, Robert F. Kennedy Jr., President Donald Trump’s pick to lead the US’s health agencies, has been facing questions from senators as part of his confirmation hearing for the role. So far, it’s been a dramatic watch, with plenty of fiery exchanges, screams from audience members, and damaging revelations.

There’s also been a lot of discussion about vaccines. Kennedy has long been a vocal critic of vaccines. He has spread misinformation about the effects of vaccines. He’s petitioned the government to revoke the approval of vaccines. He’s sued pharmaceutical companies that make vaccines.

Kennedy has his supporters. But not everyone who opts not to vaccinate shares his worldview. There are lots of reasons why people don’t vaccinate themselves or their children.

Understanding those reasons will help us tackle an issue considered to be a huge global health problem today. And plenty of researchers are working on tools to do just that.

Jonathan Kantor is one of them. Kantor, who is jointly affiliated with the University of Pennsylvania in Philadelphia and the University of Oxford in the UK, has been developing a scale to measure and assess “vaccine hesitancy.”

That term is what best captures the diverse thoughts and opinions held by people who don’t get vaccinated, says Kantor. “We used to tend more toward [calling] someone … a vaccine refuser or denier,” he says. But while some people under this umbrella will be stridently opposed to vaccines for various reasons, not all of them will be. Some may be unsure or ambivalent. Some might have specific fears, perhaps about side effects or even about needle injections.

Vaccine hesitancy is shared by “a very heterogeneous group,” says Kantor. That group includes “everyone from those who have a little bit of wariness … and want a little bit more information … to those who are strongly opposed and feel that it is their mission in life to spread the gospel regarding the risks of vaccination.”

To begin understanding where individuals sit on this spectrum and why, Kantor and his colleagues scoured published research on vaccine hesitancy. They sent surveys to 50 people, asking them detailed questions about their feelings on vaccines. The researchers were looking for themes: Which issues kept cropping up?

They found that prominent concerns about vaccines tend to fall into three categories: beliefs, pain, and deliberation. Beliefs might be along the lines of “It is unhealthy for children to be vaccinated as much as they are today.” Concerns around pain center more on the immediate consequences of the vaccination, such as fears about the injection. And deliberation refers to the need some people feel to “do their own research.”

Kantor and his colleagues used their findings to develop a 13-question survey, which they trialed in 500 people from the UK and 500 more from the US. They found that responses to the questionnaire could predict whether someone had been vaccinated against covid-19.

Theirs is not the first vaccine hesitancy scale out there—similar questionnaires have been developed by others, often focusing on parents’ feelings about their children’s vaccinations. But Kantor says this is the first to incorporate the theme of deliberation—a concept that seems to have become more popular during the early days of covid-19 vaccination rollouts.

Nicole Vike at the University of Cincinnati and her colleagues are taking a different approach. They say research has suggested that how people feel about risks and rewards seems to influence whether they get vaccinated (although not necessarily in a simple or direct manner).

Vike’s team surveyed over 4,000 people to better understand this link, asking them information about themselves and how they felt about a series of pictures of sports, nature scenes, cute and aggressive animals, and so on. Using machine learning, they built a model that could predict, from these results, whether a person would be likely to get vaccinated against covid-19.
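The paper’s exact model and features aren’t described here, but the general shape of the approach—predicting vaccination status from survey responses with an off-the-shelf classifier—can be sketched with entirely made-up data.

```python
# Hypothetical sketch of the general approach: predict vaccination status from
# survey responses with an off-the-shelf classifier. Feature names and data are
# invented for illustration; the published study's model and features differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 12))                 # e.g., ratings of images, demographics
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # vaccinated?

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```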

This survey could be easily distributed to thousands of people and is subtle enough that people taking it might not realize it is gathering information about their vaccine choices, Vike and her colleagues wrote in a paper describing their research. And the information collected could help public health centers understand where there is demand for vaccines, and conversely, where outbreaks of vaccine-preventable diseases might be more likely.

Models like these could be helpful in combating vaccine hesitancy, says Ashlesha Kaushik, vice president of the Iowa Chapter of the American Academy of Pediatrics. The information could enable health agencies to deliver tailored information and support to specific communities that share similar concerns, she says.

Kantor, who is a practicing physician, hopes his questionnaire could offer doctors and other health professionals insight into their patients’ concerns and suggest ways to address them. It isn’t always practical for doctors to sit down with their patients for lengthy, in-depth discussions about the merits and shortfalls of vaccines. But if a patient can spend a few minutes filling out a questionnaire before the appointment, the doctor will have a starting point for steering a respectful and fruitful conversation about the subject.

When it comes to vaccine hesitancy, we need all the insight we can get. Vaccines prevent millions of deaths every year. One and a half million children under the age of five die every year from vaccine-preventable diseases, according to the children’s charity UNICEF. In 2019, the World Health Organization included “vaccine hesitancy” on its list of 10 threats to global health.

When vaccination rates drop, we start to see outbreaks of the diseases the vaccines protect against. We’ve seen this a lot recently with measles, which is incredibly infectious. Sixteen measles outbreaks were reported in the US in 2024.

Globally, over 22 million children missed their first dose of the measles vaccine in 2023, and measles cases rose by 20%. Over 107,000 people around the world died from measles that year, according to the US Centers for Disease Control and Prevention. Most of them were children.

Vaccine hesitancy is dangerous. “It’s really creating a threatening environment for these vaccine-preventable diseases to make a comeback,” says Kaushik. 

Kantor agrees: “Anything we can do to help mitigate that, I think, is great.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

In 2021, my former colleague Tanya Basu wrote a guide to having discussions about vaccines with people who are hesitant. Kindness and nonjudgmentalism will get you far, she wrote.

In December 2020, as covid-19 ran rampant around the world, doctors took to social media platforms like TikTok to allay fears around the vaccine. Sharing their personal experiences was important—but not without risk, A.W. Ohlheiser reported at the time.

Robert F. Kennedy Jr. is currently in the spotlight for his views on vaccines. But he has also spread harmful misinformation about HIV and AIDS, as Anna Merlan reported.

mRNA vaccines have played a vital role in the covid-19 pandemic, and in 2023, the researchers who pioneered the science behind them were awarded a Nobel Prize. Here’s what’s next for mRNA vaccines.

Vaccines are estimated to have averted 154 million deaths in the last 50 years. That number includes 146 million children under the age of five. That’s partly why childhood vaccines are a public health success story.

From around the web

As Robert F. Kennedy Jr.’s Senate hearing continued this week, so did the revelations of his misguided beliefs about health and vaccines. Kennedy, who has called himself “an expert on vaccines,” said in 2021 that “we should not be giving Black people the same vaccine schedule that’s given to whites, because their immune system is better than ours”—a claim that is not supported by evidence. (The Washington Post)

And in past email exchanges with his niece, a primary-care physician at NYC Health + Hospitals in New York City, RFK Jr. made repeated false claims about covid-19 vaccinations and questioned the value of annual flu vaccinations. (STAT)

Towana Looney, who became the third person to receive a gene-edited pig kidney in December, is still healthy and full of energy two months later. The milestone makes Looney the longest-living recipient of a pig organ transplant. “I’m superwoman,” she told the Associated Press. (AP)

The Trump administration’s attempt to freeze trillions of dollars in federal grants, loans, and other financial assistance programs was chaotic. Even a pause in funding for global health programs can amount to destruction, writes Atul Gawande. (The New Yorker)

How ultraprocessed is the food in your diet? This chart can help rank food items—but won’t tell you all you need to know about how healthy they are. (Scientific American)

Read more