This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few words in a search box and in return get a list of blue links to the most relevant results. Fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in a structured way.

But all that is up for grabs. We are at a new inflection point.

The biggest change to the way search engines deliver information to us since the 1990s is happening right now. No more keyword searching. Instead, you can ask questions in natural language. And instead of links, you'll increasingly be met with answers written by generative AI, drawing on live information from across the internet and delivered in that same conversational style.

Not everyone is excited for the change. Publishers are completely freaked out. And people are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Read the full story.

—Mat Honan

This story is from the latest print edition of MIT Technology Review—it’s all about the exciting breakthroughs happening in the world right now. If you don’t already, subscribe to receive future copies.

What’s next for our privacy?

Every day, we are tracked hundreds or even thousands of times across the digital world. All of this is collected, packaged together with other details, and used to create highly personalized profiles that are then shared or sold, often without our explicit knowledge or consent.

A consensus is growing that Americans need better privacy protections—and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation.

So what can Americans expect for their personal data in 2025? We spoke to privacy experts and advocates about what’s on their mind regarding how our digital data might be traded or protected moving forward. Read the full story.

—Eileen Guo

This piece is part of MIT Technology Review’s What’s Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

How optimistic are you about AI’s future?

The start of a new year, and maybe especially this one, feels like a good time for a gut check: How optimistic are you feeling about the future of technology? 

Our annual list of 10 Breakthrough Technologies, published on Friday, might help you decide. Artificial intelligence powers four of the breakthroughs featured on the list, and I expect your feelings about them will vary widely. Read the full story.

—James O’Donnell

This story is from the Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The Vera C. Rubin Observatory is ready to transform our understanding of the cosmos

High atop Chile’s 2,700-meter Cerro Pachón, the air is clear and dry, leaving few clouds to block the beautiful view of the stars. It’s here that the Vera C. Rubin Observatory will soon use a car-size 3,200-megapixel digital camera—the largest ever built—to produce a new map of the entire night sky every three days.

Findings from the observatory will help tease apart fundamental mysteries like the nature of dark matter and dark energy, two phenomena that have not been directly observed but affect how objects are bound together—and pushed apart. 

A quarter-century in the making, the observatory is poised to expand our understanding of just about every corner of the universe. Read the full story.

—Adam Mann

The Vera C. Rubin Observatory is one of our 10 Breakthrough Technologies for 2025, MIT Technology Review’s annual list of tech to watch. Check out the rest of the list, and cast your vote for the honorary 11th breakthrough—you have until 1 April!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A Louisiana man has died of bird flu
He’s the first person known to have died from the virus in the US. (WP $)
+ He was over 65 years old and had underlying health conditions. (NYT $)
+ The risk of a bird flu pandemic is rising. (MIT Technology Review)

2 Meta is shifting towards the right
Appointing Trump ally Dana White to its board is the latest in a string of political moves. (NYT $)
+ Mark Zuckerberg has overhauled Meta’s board in the last five years. (Bloomberg $)
+ The company recently donated $1 million to Trump’s inaugural fund. (WSJ $)

3 The Pentagon is blacklisting China’s biggest EV battery firm
CATL and other companies will be barred from doing business with it. (WP $)
+ The US is convinced they’re working with China’s military. (CNN)

4 Nvidia is working on a ‘personal AI supercomputer’
Project Digits will go on sale in May, priced at a whopping $3,000. (TechCrunch)
+ It’s based on a super secret chip, apparently. (VentureBeat)
+ CEO Jensen Huang has his sights set on humanoid robots, too. (FT $)

5 Doctors are turning to AI for note taking during appointments
It could save them hours each day—if it doesn’t mess up, that is. (FT $)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

6 U-Haul is a treasure trove of personal user data
And hackers are exploiting it to dox or hack their victims. (404 Media)

7 New York drivers are already trying to evade congestion pricing
Subtly obscuring license plates can trick tracking cameras. (New York Post)
+ Reaction to the new charge is decidedly mixed. (NY Mag $)
+ Why EVs are (mostly) set for solid growth in 2025. (MIT Technology Review)

8 Frustrated workers are complaining about their bosses on LinkedIn
Try this at your own risk. (Insider $)

9 Men are notoriously poor at replying to text messages 💬
And their failure to communicate could be making them lonely. (The Atlantic $)

10 You can now play Doom on a captcha
What better way to prove you’re not a bot? (Vice)
+ Death to captchas. (MIT Technology Review)

Quote of the day

“We have glitches that need stitches.”

—Tech entrepreneur Mike Johns, describing to the Guardian his experience of being trapped in a malfunctioning self-driving car, an ordeal that nearly caused him to miss a flight.

The big story

What happens when your prescription drug becomes the center of covid misinformation

September 2021

By the time Joe Rogan mentioned ivermectin as one ingredient in an experimental cocktail he was taking to treat his covid infection, the drug was a meme. In the weeks leading up to the popular podcaster’s revelation, the drug had already become a flashpoint in the covid culture wars.

But ivermectin isn’t some new or experimental drug: in addition to its use as an anti-parasite treatment for livestock, it’s commonly employed in humans to treat a form of rosacea, among other things. So for those of us who have been using it for years, its sudden infamy was unexpected and unwelcome. Read the full story.

—Abby Ohlheiser

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ RIP the bar cart, we barely knew you.
+ If you’ve ever wondered what happens to your unclaimed luggage, now you’ll finally have an answer.
+ This motorbike-sized tuna is a thing of beauty. 🐟
+ Happy birthday to the one and only Michael Stipe, who turned 65 over the weekend.

Read more

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.

Every day, we are tracked hundreds or even thousands of times across the digital world. Cookies and web trackers capture every website link that we click, while code installed in mobile apps tracks every physical location that our devices—and, by extension, we—have visited. All of this is collected, packaged together with other details (compiled from public records, supermarket member programs, utility companies, and more), and used to create highly personalized profiles that are then shared or sold, often without our explicit knowledge or consent. 
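
To make that concrete, here is a minimal, hypothetical sketch of how scattered signals from different trackers can be stitched into a single profile keyed on one identifier. The device ID, site names, coordinates, and broker fields below are invented for illustration only; real data brokers operate at vastly larger scale.

# Hypothetical illustration: merging separate tracking signals, each fairly
# innocuous on its own, into one profile keyed on a shared device/ad ID.
from collections import defaultdict

# Click events captured by web trackers: (device ID, site, timestamp).
web_clicks = [
    ("ad-id-1234", "pharmacy-coupons.example", "2025-01-03T09:12"),
    ("ad-id-1234", "divorce-lawyers.example", "2025-01-04T21:40"),
]

# Location pings reported by SDKs embedded in mobile apps.
location_pings = [
    ("ad-id-1234", (40.7411, -73.9897), "2025-01-05T08:02"),  # near a clinic
]

# Records bought from other sources (public records, loyalty programs, etc.).
purchased_records = {
    "ad-id-1234": {"age_range": "35-44", "household_income": "$75k-$100k"},
}

profiles = defaultdict(dict)
for device_id, site, ts in web_clicks:
    profiles[device_id].setdefault("sites", []).append(site)
for device_id, latlon, ts in location_pings:
    profiles[device_id].setdefault("places", []).append(latlon)
for device_id, extras in purchased_records.items():
    profiles[device_id].update(extras)

print(dict(profiles))  # one merged profile per device ID, ready to be shared or sold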

A consensus is growing that Americans need better privacy protections—and that the best way to deliver them would be for Congress to pass comprehensive federal privacy legislation. While the latest iteration of such a bill, the American Privacy Rights Act of 2024, gained more momentum than previously proposed laws, it became so watered down that it lost support from both Republicans and Democrats before it even came to a vote. 

There have been some privacy wins in the form of limits on what data brokers—third-party companies that buy and sell consumers’ personal information for targeted advertisements, messaging, and other purposes—can do with geolocation data. 

These are still small steps, though—and they are happening as increasingly pervasive and powerful technologies collect more data than ever. And at the same time, Washington is preparing for a new presidential administration that has attacked the press and other critics, promised to target immigrants for mass deportation, threatened to seek retribution against perceived enemies, and supported restrictive state abortion laws. This is not even to mention the increased collection of our biometric data, especially for facial recognition, and the normalization of its use in all kinds of ways. In this light, it’s no stretch to say our personal data has arguably never been more vulnerable, and the imperative for privacy has never felt more urgent. 

So what can Americans expect for their personal data in 2025? We spoke to privacy experts and advocates about (some of) what’s on their mind regarding how our digital data might be traded or protected moving forward. 

Reining in a problematic industry

In early December, the Federal Trade Commission announced separate settlement agreements with the data brokers Mobilewalla and Gravy Analytics (and its subsidiary Venntel). Finding that the companies had tracked and sold geolocation data from users at sensitive locations like churches, hospitals, and military installations without explicit consent, the FTC banned the companies from selling such data except in specific circumstances. This follows something of a busy year in regulation of data brokers, including multiple FTC enforcement actions against other companies for similar use and sale of geolocation data, as well as a proposed rule from the Justice Department that would prohibit the sale of bulk data to foreign entities. 

And on the same day that the FTC announced these settlements in December, the Consumer Financial Protection Bureau proposed a new rule that would designate data brokers as consumer reporting agencies, which would trigger stringent reporting requirements and consumer privacy protections. The rule would prohibit the collection and sharing of people’s sensitive information, such as their salaries and Social Security numbers, without “legitimate purposes.” While the rule will still need to undergo a 90-day public comment period, and it’s unclear whether it will move forward under the Trump administration, if it’s finalized it has the power to fundamentally limit how data brokers do business.

Right now, there just aren’t many limits on how these companies operate—nor, for that matter, clear information on how many data brokerages even exist. Industry watchers estimate there may be 4,000 to 5,000 data brokers around the world, many of which we’ve never heard of—and whose names constantly shift. In California alone, the state’s 2024 Data Broker Registry lists 527 such businesses that have voluntarily registered there, nearly 90 of which also self-reported that they collect geolocation data. 

All this data is widely available for purchase by anyone who will pay. Marketers buy data to create highly targeted advertisements, and banks and insurance companies do the same to verify identity, prevent fraud, and conduct risk assessments. Law enforcement buys geolocation data to track people’s whereabouts without getting traditional search warrants. Foreign entities can also currently buy sensitive information on members of the military and other government officials. And on people-finder websites, basically anyone can pay for anyone else’s contact details and personal history.  

Data brokers and their clients defend these transactions by saying that most of this data is anonymized—though it’s questionable whether that can truly be done in the case of geolocation data. Besides, anonymous data can be easily reidentified, especially when it’s combined with other personal information. 
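
As a toy example of why that defense is shaky, here is a minimal sketch showing how an "anonymized" device can be reidentified simply by matching where it spends its nights against a separate address list. The coordinates, names, and records are invented for illustration.

# Hypothetical illustration: reidentifying an "anonymized" device by matching
# its most frequent nighttime location against a separate address dataset.
from collections import Counter

# "Anonymized" pings: no name, just a random-looking ID, coordinates, and hour of day.
anon_pings = [
    ("device-42", (40.7300, -73.9950), 1),   # 1 a.m. -> probably home
    ("device-42", (40.7300, -73.9950), 2),
    ("device-42", (40.7520, -73.9770), 14),  # 2 p.m. -> probably work
]

# A separate dataset linking addresses (as coordinates) to named residents,
# for example compiled from public records or a people-finder site.
address_records = {
    (40.7300, -73.9950): "Jane Doe",
    (40.7650, -73.9800): "John Roe",
}

# Find where the device spends its nights, then look that location up by name.
night_locations = Counter(loc for dev, loc, hour in anon_pings if hour < 6)
likely_home, _ = night_locations.most_common(1)[0]
print(address_records.get(likely_home))  # -> "Jane Doe"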

Digital-rights advocates have spent years sounding the alarm on this secretive industry, especially the ways in which it can harm already marginalized communities, though various types of data collection have sparked consternation across the political spectrum. Representative Cathy McMorris Rodgers, the Republican chair of the House Energy and Commerce Committee, for example, was concerned about how the Centers for Disease Control and Prevention bought location data to evaluate the effectiveness of pandemic lockdowns. Then a study from last year showed how easy (and cheap) it was to buy sensitive data about members of the US military; Senator Elizabeth Warren, a Democrat, called out the national security risks of data brokers in a statement to MIT Technology Review, and Senator John Cornyn, a Republican, later said he was “shocked” when he read about the practice in our story. 

But it was the 2022 Supreme Court decision ending the constitutional guarantee of legal abortion that spurred much of the federal action last year. Shortly after the Dobbs ruling, President Biden issued an executive order to protect access to reproductive health care; it included instructions for the FTC to take steps preventing information about visits to doctor’s offices or abortion clinics from being sold to law enforcement agencies or state prosecutors.

The new enforcers

With Donald Trump taking office in January, and Republicans taking control of both houses of Congress, the fate of the CFPB’s proposed rule—and the CFPB itself—is uncertain. Republicans, the people behind Project 2025, and Elon Musk (who will lead the newly created advisory group known as the Department of Government Efficiency) have long been interested in seeing the bureau “deleted,” as Musk put it on X. That would take an act of Congress, making it unlikely, but there are other ways that the administration could severely curtail its powers. Trump is likely to fire the current director and install a Republican who could rescind existing CFPB rules and stop any proposed rules from moving forward. 

Meanwhile, the FTC’s enforcement actions are only as good as the enforcers. FTC decisions do not set legal precedent in quite the same way that court cases do, says Ben Winters, a former Department of Justice official and the director of AI and privacy at the Consumer Federation of America, a network of organizations and agencies focused on consumer protection. Instead, they “require consistent [and] additional enforcement to make the whole industry scared of not having an FTC enforcement action against them.” (It’s also worth noting that these FTC settlements are specifically focused on geolocation data, which is just one of the many types of sensitive data that we regularly give up in order to participate in the digital world.)

Looking ahead, Tiffany Li, a professor at the University of San Francisco School of Law who focuses on AI and privacy law, is worried about “a defanged FTC” that she says would be “less aggressive in taking action against companies.” 

Lina Khan, the current FTC chair, has been the leader of privacy protection action in the US, notes Li, and she’ll soon be leaving. Andrew Ferguson, Trump’s recently named pick to be the next FTC chair, has come out in strong opposition to data brokers: “This type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs,” he wrote in a statement on the Mobilewalla decision, indicating that he is likely to continue action against them. (Ferguson has been serving as a commissioner on the FTC since April 2024.) On the other hand, he has spoken out against using FTC actions as an alternative to privacy legislation passed by Congress. And, of course, this brings us right back around to that other major roadblock: Congress has so far failed to pass such laws—and it’s unclear if the next Congress will either.

Movement in the states

Without federal legislative action, many US states are taking privacy matters into their own hands. 

In 2025, eight new state privacy laws will take effect, making a total of 25 around the country. A number of other states—like Vermont and Massachusetts—are considering passing their own privacy bills next year, and such laws could, in theory, force national legislation, says Woodrow Hartzog, a technology law scholar at Boston University School of Law. “Right now, the statutes are all similar enough that the compliance cost is perhaps expensive but manageable,” he explains. But if one state passed a law that was different enough from the others, a national law could be the only way to resolve the conflict. Additionally, four states—California, Texas, Vermont, and Oregon—already have specific laws regulating data brokers, including the requirement that they register with the state. 

Along with new laws, says Justin Brookman, the director of technology policy at Consumer Reports, comes the possibility that “we can put some more teeth on these laws.” 

Brookman points to Texas, where some of the most aggressive enforcement action at the state level has taken place under its Republican attorney general, Ken Paxton. Even before the state’s new consumer privacy bill went into effect in July, Paxton announced the creation of a special task force focused on enforcing the state’s privacy laws. He has since targeted a number of data brokers—including National Public Data, which exposed millions of sensitive customer records in a data breach in August, as well as companies that sell to them, like Sirius XM. 

At the same time, though, Paxton has moved to enforce the state’s strict abortion laws in ways that threaten individual privacy. In December, he sued a New York doctor for sending abortion pills to a Texas woman through the mail. While the doctor is theoretically protected by New York’s shield laws, which provide a safeguard from out-of-state prosecution, Paxton’s aggressive action makes it even more crucial that states enshrine data privacy protections into their laws, says Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, an advocacy group. “There is an urgent need for states,” he says, “to lock down our residents’ data, barring companies from collecting and sharing information in ways that can be weaponized against them by out-of-state prosecutors.”

Data collection in the name of “security”

While privacy has become a bipartisan issue, Republicans, in particular, are interested in “addressing data brokers in the context of national security,” such as protecting the data of military members or other government officials, says Winters. But in his view, it’s the effects on reproductive rights and immigrants that are potentially the “most dangerous” threats to privacy. 

Indeed, data brokers (including Venntel, the Gravy Analytics subsidiary named in the recent FTC settlement) have sold cell-phone data to Immigration and Customs Enforcement, as well as to Customs and Border Protection. That data has then been used to track individuals for deportation proceedings—allowing the agencies to bypass local and state sanctuary laws that ban local law enforcement from sharing information for immigration enforcement. 

“The more data that corporations collect, the more data that’s available to governments for surveillance,” warns Ashley Gorski, a senior attorney who works on national security and privacy at the American Civil Liberties Union.

The ACLU is among a number of organizations that have been pushing for the passage of another federal law related to privacy: the Fourth Amendment Is Not For Sale Act. It would close the so-called “data-broker loophole” that allows law enforcement and intelligence agencies to buy personal information from data brokers without a search warrant. The bill would “dramatically limit the ability of the government to buy Americans’ private data,” Gorski says. It was first introduced in 2021 and passed the House in April 2024, with the support of 123 Republicans and 93 Democrats, before stalling in the Senate. 

While Gorski is hopeful that the bill will move forward in the next Congress, others are less sanguine about these prospects—and alarmed about other ways that the incoming administration might “co-opt private systems for surveillance purposes,” as Hartzog puts it. So much of our personal information that is “collected for one purpose,” he says, could “easily be used by the government … to track us.” 

This is especially concerning, adds Winters, given that the next administration has been “very explicit” about wanting to use every tool at its disposal to carry out policies like mass deportations and to exact revenge on perceived enemies. And one possible change, he says, is as simple as loosening the government’s procurement processes to make them more open to emerging technologies, which may have fewer privacy protections. “Right now, it’s annoying to procure anything as a federal agency,” he says, but he expects a more “fast and loose use of commercial tools.” 

“That’s something we’ve [already] seen a lot,” he adds, pointing to “federal, state, and local agencies using the Clearviews of the world”—a reference to the controversial facial recognition company. 

The AI wild card

Underlying all of these debates on potential legislation is the fact that technology companies—especially AI companies—continue to require reams and reams of data, including personal data, to train their machine-learning models. And they’re quickly running out of it. 

This is something of a wild card in any predictions about personal data. Ideally, says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, the shortage would lead to ways for consumers to directly benefit, perhaps financially, from the value of their own data. But it’s more likely that “there will be more industry resistance against some of the proposed comprehensive federal privacy legislation bills,” she says. “Companies benefit from the status quo.” 

The hunt for more and more data may also push companies to change their own privacy policies, says Whitney Merrill, a former FTC official who works on data privacy at Asana. Speaking in a personal capacity, she says that companies “have felt the squeeze in the tech recession that we’re in, with the high interest rates,” and that under those circumstances, “we’ve seen people turn around, change their policies, and try to monetize their data in an AI world”—even if it’s at the expense of user privacy. She points to the $60-million-per-year deal that Reddit struck last year to license its content to Google to help train the company’s AI. 

Earlier this year, the FTC warned companies that it would be “unfair and deceptive” to “surreptitiously” change their privacy policies to allow for the use of user data to train AI. But again, whether or not officials follow up on this depends on those in charge. 

So what will privacy look like in 2025? 

While the recent FTC settlements and the CFPB’s proposed rule represent important steps forward in privacy protection—at least when it comes to geolocation data—Americans’ personal information still remains widely available and vulnerable. 

Rebecca Williams, a senior strategist at the ACLU for privacy and data governance, argues that all of us, as individuals and communities, should take it upon ourselves to do more to protect ourselves and “resist … by opting out” of as much data collection as possible. That means checking privacy settings on accounts and apps, and using encrypted messaging services. 

Cahn, meanwhile, says he’ll “be striving to protect [his] local community, working to enact safeguards to ensure that we live up to our principles and stated commitments.” One example of such safeguards is a proposed New York City ordinance that would ban the sharing of any location data originating from within the city limits. Hartzog says that kind of local activism has already been effective in pushing for city bans on facial recognition. 

“Privacy rights are at risk, but they’re not gone, and it’s not helpful to take an overly pessimistic look right now,” says Li, the USF law professor. “We definitely still have privacy rights, and the more that we continue to fight for these rights, the more we’re going to be able to protect our rights.”

Read more

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The start of a new year, and maybe especially this one, feels like a good time for a gut check: How optimistic are you feeling about the future of technology? 

Our annual list of 10 Breakthrough Technologies, published on Friday, might help you decide. It’s the 24th time we’ve published such a list. But just like our earliest picks (2001’s list featured brain-computer interfaces and ways to track copyrighted content on the internet, by the way), this year’s technologies may come to help society, harm it, or both.

Artificial intelligence powers four of the breakthroughs featured on the list, and I expect your optimism about them will vary widely. Take generative AI search. Now becoming the norm on Google with its AI Overviews, it promises to help sort through the internet’s incomprehensible volume of information to offer better answers for the questions we ask. Along the way, it is upending the model of how content creators get paid, and positioning fallible AI as the arbiter of truth and facts. Read more here.

Also making the list is the immense progress in the world of robots, which can now learn faster thanks to AI. This means we will soon have to wrestle with whether we will trust humanoid robots enough to welcome them into our most private spaces, and how we will feel if they are remotely controlled by human beings working abroad. 

The list also features lots of technologies outside the world of AI, which I implore you to read about if only for a reminder of just how much other scientific progress is being made. This year may see advances in studying dark matter with the largest digital camera ever made for astronomy, reducing emissions from cow burps, and preventing HIV with an injection just once every six months. We also detail how technologies that you’ve long heard about—from robotaxis to stem cells—are finally making good on some of their promises.

This year, the cultural gulf between techno-optimists and, well, everyone else is set to widen. The incoming administration will be perhaps the one most shaped by Silicon Valley in recent memory, thanks to Donald Trump’s support from venture capitalists like Marc Andreessen (the author of the Techno-Optimist Manifesto) and his relationship, however recently fraught, with Elon Musk. Those figures have critiqued the Biden administration’s approach to technology as slow, “woke,” and overly cautious—attitudes they have vowed to reverse. 

So as we begin a year of immense change, here’s a small experiment I’d encourage you to do. Think about your level of optimism for technology and what’s driving it. Read our list of breakthroughs. Then see how you’ve shifted. I suspect that, like many people, you’ll find you don’t fit neatly in the camp of either optimists or pessimists. Perhaps that’s where the best progress will be made. 


Now read the rest of The Algorithm

Deeper Learning

The biggest AI flops of 2024

Though AI has remained in the spotlight this year (and even contributed to Nobel Prize–winning research in chemistry), it has not been without its failures. Take a look back over the year’s top AI failures, from chatbots dishing out illegal advice to dodgy AI-generated search results. 

Why it matters: These failures show that there are tons of unanswered questions about the technology, including who will moderate what it produces and how, whether we’re getting too trusting of the answers that chatbots produce, and what we’ll do with the mountain of “AI slop” that is increasingly taking over the internet. Above all, they illustrate the many pitfalls of blindly shoving AI into every product we interact with.

Bits and Bytes

What it’s like being a pedestrian in the world of Waymos 

Tech columnist Geoffrey Fowler finds that Waymo robotaxis regularly fail to stop for him at a crosswalk he uses every day. Though you can sometimes make eye contact with human drivers to gauge whether they’ll stop, Waymos lack that “social intelligence,” Fowler writes. (The Washington Post)

The AI Hype Index

For each print issue, MIT Technology Review publishes an AI Hype Index, a highly subjective take on the latest buzz about AI. See where facial recognition, AI replicas of your personality, and more fall on the index. (MIT Technology Review)

What’s going on at the intersection of AI and spirituality

Modern religious leaders are experimenting with AI just as earlier generations examined radio, television, and the internet. They include Rabbi Josh Fixler, who created “Rabbi Bot,” a chatbot trained on his old sermons. (The New York Times)

Meta has appointed its most prominent Republican to lead its global policy team

Just two weeks ahead of Donald Trump’s inauguration, Meta has announced it will appoint Joel Kaplan, who was White House deputy chief of staff under George W. Bush, to the company’s top policy role. Kaplan will replace Nick Clegg, who has led changes on content and elections policies. (Semafor)

Apple has settled a privacy lawsuit over Siri

The company has agreed to pay $95 million to settle a class action lawsuit alleging that Siri could be activated accidentally and then record private conversations without consent. The news comes after MIT Technology Review reported that Apple was looking into whether it could get rid of the need to use a trigger phrase like “Hey Siri” entirely. (The Washington Post)
