This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
We saw a demo of the new AI system powering Anduril’s vision for war
—James O’Donnell
One afternoon in late November, I visited a weapons test site in the foothills east of San Clemente, California, operated by Anduril, a maker of AI-powered drones and missiles that recently announced a partnership with OpenAI.
I went there to witness a new system it’s expanding today, which allows external parties to tap into its software and share data in order to speed up decision-making on the battlefield.
If it works as planned over the course of a new three-year contract with the Pentagon, it could embed AI more deeply than ever before into the theater of war. Read the full story.
How to use Sora, OpenAI’s new video generating tool
OpenAI has just released its video generation model Sora to the public. The announcement yesterday came on the fifth day of the company’s “shipmas” event, a 12-day marathon of tech releases and demos. Here’s what you should know—and how you can use the video model right now.
—James O’Donnell
This story is the latest in MIT Technology Review’s How To series, which helps you get things done.
AI’s hype and antitrust problem is coming under scrutiny
The AI sector is plagued by a lack of competition and a lot of deceit—or at least that’s one way to interpret the latest flurry of actions taken in Washington.
The actions—from antitrust investigations to accusations of straight-up lying—represent an effort to hold the AI industry’s hype to account in the final months before the Federal Trade Commission’s chair, Lina Khan, is replaced when Donald Trump takes office.
But while the FTC looks to have a far smoother transition of leadership ahead than most other federal agencies, at least some of Trump’s frustrations with Big Tech could send antitrust efforts in a distinctly new direction. Read the full story.
—James O’Donnell
This story is from The Algorithm, our weekly newsletter giving you the inside track on all things happening in the fascinating field of AI. Sign up to receive it in your inbox every Monday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Google has built a powerful new quantum computing chip
But it doesn’t have any real-world applications—yet. (Bloomberg $)
+ It takes five minutes to solve a problem that a traditional supercomputer could not master in 10 septillion years. (NYT $)
+ It’s a challenge the quantum field has been trying to crack for decades. (The Guardian)
+ We covered the work when it was a preprint in September. (MIT Technology Review)
2 Nvidia is being investigated by China
It claims the chipmaking giant has violated anti-monopoly laws. (BBC)
+ Nvidia’s biggest customer in the country? That would be ByteDance. (Insider $)
+ What’s next in chips. (MIT Technology Review)
3 TikTok has asked a US appeals court to halt the buy-or-sell law
As it stands, the app faces a ban unless it finds a new owner by January 19. (TechCrunch)
4 AI is still failing to deliver on its economic promises
Is 2025 the year we finally start to see some results? (Quartz)
+ The US AI industry is in desperate need of more sites with power grid access. (FT $)
+ How to fine-tune AI for prosperity. (MIT Technology Review)
5 The EU’s competition rules are on the verge of a big shakeup
A new boss means a new approach. (WSJ $)
+ European regulators want to get to the bottom of a Meta and Google investigation. (FT $)
6 Weight-loss drugs are making basic health truths obsolete
A healthy diet and regular exercise are falling by the wayside. (The Atlantic $)
+ Weight-loss injections have taken over the internet. But what does this mean for people IRL? (MIT Technology Review)
7 This bionic leg is controlled by its wearer’s brain
Prosthetic limbs are becoming much more capable. (New Yorker $)
+ These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)
8 An AI can make a pretty decent Tokyo travel companion
Just make sure you take its advice with a pinch of salt. (Wired $)
+ How to use AI to plan your next vacation. (MIT Technology Review)
9 Reddit is testing a new AI search feature
Which the site’s users are unlikely to take kindly to. (Ars Technica)
10 Jeff Bezos has a dinner with Donald Trump in his diary
Sounds cozy. (Insider $)
Quote of the day
“It’s like manna from heaven.”
—Ari Morcos, chief executive of startup DatologyAI, explains to the Wall Street Journal why Reddit’s troves of text are so appealing to AI companies.
The big story
Inside the enigmatic minds of animals
October 2022
More than ever, we feel a duty and desire to extend empathy to our nonhuman neighbors. In the last three years, more than 30 countries have formally recognized other animals—including gorillas, lobsters, crows, and octopuses—as sentient beings.
A trio of books from Ed Yong, Jackie Higgins, and Philip Ball detail creatures’ rich inner worlds and capture what has led to these developments: a booming field of experimental research challenging the long-standing view that animals are neither conscious nor cognitively complex. Read the full story.
—Matthew Ponsford
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ It seems we have two types of laugh: one caused by tickling, and the other by everything else.
+ 2024 was a strong year for fiction: check out some of the best new books.
+ There’s something totally mesmerizing about this collection of old home videos.
+ Ukrainian artist Oleg Dron specializes in expansive, haunting landscapes.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The AI sector is plagued by a lack of competition and a lot of deceit—or at least that’s one way to interpret the latest flurry of actions taken in Washington.
Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts awarded in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. “The way that the big get bigger in AI is by sucking up everyone else’s data and using it to train and expand their own systems,” Warren told the Washington Post.
The new bill would “require a competitive award process” for contracts, which would ban the use of “no-bid” awards by the Pentagon to companies for cloud services or AI foundation models. (The lawmakers’ move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)
While Big Tech is hit with antitrust investigations—including the ongoing lawsuit against Google about its dominance in search, as well as a new investigation opened into Microsoft—regulators are also accusing AI companies of, well, just straight-up lying.
On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and being trained on millions of images, two claims the FTC says are false. (The company couldn’t support the bias claim and the system was trained on only 100,000 images, the FTC says.)
A week earlier, the FTC made similar claims of deceit against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv advertises its systems as offering better protection than simple metal detectors, saying they use AI to accurately screen for guns, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims, and that its systems failed in consequential cases, such as a 2022 incident when they failed to detect a seven-inch knife that was ultimately used to stab a student.
Those add to the complaints the FTC made back in September against a number of AI companies, including one that sold a tool to generate fake product reviews and one selling “AI lawyer” services.
The actions are somewhat tame. IntelliVision and Evolv have not actually been fined. The FTC has simply prohibited the companies from making claims that they can't back up with evidence, and in the case of Evolv, it requires the company to allow certain customers to get out of their contracts if they wish.
However, they do represent an effort to hold the AI industry’s hype to account in the final months before the FTC’s chair, Lina Khan, is likely to be replaced when Donald Trump takes office. Trump has not named a pick for FTC chair, but on Thursday he named Gail Slater, a tech policy advisor and former aide to Vice President–elect JD Vance, to head the Department of Justice’s Antitrust Division. Trump has signaled that the division under Slater will keep tech behemoths like Google, Amazon, and Microsoft in the crosshairs.
“Big Tech has run wild for years, stifling competition in our most innovative sector and, as we all know, using its market power to crack down on the rights of so many Americans, as well as those of Little Tech!” Trump said in his announcement of the pick. “I was proud to fight these abuses in my First Term, and our Department of Justice’s antitrust team will continue that work under Gail’s leadership.”
That said, at least some of Trump’s frustrations with Big Tech are different—like his concerns that conservatives could be targets of censorship and bias. And that could send antitrust efforts in a distinctly new direction on his watch.
Now read the rest of The Algorithm
Deeper Learning
The US Department of Defense is investing in deepfake detection
The Pentagon’s Defense Innovation Unit, a tech accelerator within the military, has awarded its first contract for deepfake detection. Hive AI will receive $2.4 million over two years to help detect AI-generated video, image, and audio content.
Why it matters: As hyperrealistic deepfakes get cheaper and easier to produce, they hurt our ability to tell what’s real. The military’s investment in deepfake detection shows that the problem has national security implications as well. The open question is how accurate these detection tools are, and whether they can keep up with the unrelenting pace at which deepfake generation techniques are improving. Read more from Melissa Heikkilä.
Bits and Bytes
The owner of the LA Times plans to add an AI-powered “bias meter” to its news stories
Patrick Soon-Shiong is building a tool that will allow readers to “press a button and get both sides” of a story. But trying to create an AI model that can somehow provide an objective view of news events is controversial, given that models are biased both by their training data and by fine-tuning methods. (Yahoo)
Google DeepMind’s new AI model is the best yet at weather forecasting
It’s the second AI weather model that Google has launched in just the past few months. But this one’s different: It leaves out traditional physics models and relies on AI methods alone. (MIT Technology Review)
How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe
Startups in Latvia and other nearby countries see the mobilization of Ukraine as a warning and an inspiration. They are now changing consumer products—from scooters to recreational drones—for use on the battlefield. (MIT Technology Review)
How Nvidia’s Jensen Huang is avoiding $8 billion in taxes
Jensen Huang runs Nvidia, the world’s top chipmaker and most valuable company. His wealth has soared during the AI boom, and he has taken advantage of a number of tax dodges “that will enable him to pass on much of his fortune tax free,” according to the New York Times. (The New York Times)
Meta is pursuing nuclear energy for its AI ambitions
Meta wants more of its AI training and development to be powered by nuclear energy, joining the ranks of Amazon and Microsoft. The news comes as many companies in Big Tech struggle to meet their sustainability goals amid the soaring energy demands from AI. (Meta)
Correction: A previous version of this article stated that Gail Slater was picked by Donald Trump to be the head of the FTC. Slater was in fact picked to lead the Department of Justice’s Antitrust Division. We apologize for the error.
One afternoon in late November, I visited a weapons test site in the foothills east of San Clemente, California, operated by Anduril, a maker of AI-powered drones and missiles that recently announced a partnership with OpenAI. I went there to witness a new system it’s expanding today, which allows external parties to tap into its software and share data in order to speed up decision-making on the battlefield. If it works as planned over the course of a new three-year contract with the Pentagon, it could embed AI more deeply into the theater of war than ever before.
Near the site’s command center, which looked out over desert scrub and sage, sat pieces of Anduril’s hardware suite that have helped the company earn its $14 billion valuation. There was Sentry, a security tower of cameras and sensors currently deployed at both US military bases and the US-Mexico border, alongside advanced radars. Multiple drones, including an eerily quiet model called Ghost, sat ready to be deployed. What I was there to watch, though, was a different kind of weapon, displayed on two large television screens positioned at the test site’s command station.
I was there to examine the pitch being made by Anduril, other companies in defense tech, and growing numbers of people within the Pentagon itself: A future “great power” conflict—military jargon for a global war involving competition between multiple countries—will not be won by the entity with the most advanced drones or firepower, or even the cheapest firepower. It will be won by whoever can sort through and share information the fastest. And that will have to be done “at the edge” where threats arise, not necessarily at a command post in Washington.
A desert drone test
“You’re going to need to really empower lower levels to make decisions, to understand what’s going on, and to fight,” Anduril CEO Brian Schimpf says. “That is a different paradigm than today.” Currently, information flows poorly among people on the battlefield and decision-makers higher up the chain.
To show how the new tech will fix that, Anduril walked me through an exercise demonstrating how its system would take down an incoming drone threatening a base of the US military or its allies (the scenario at the center of Anduril’s new partnership with OpenAI). It began with a truck in the distance, driving toward the base. The AI-powered Sentry tower automatically recognized the object as a possible threat, highlighting it as a dot on one of the screens. Anduril’s software, called Lattice, sent a notification asking the human operator if he would like to send a Ghost drone to monitor. After a click of his mouse, the drone piloted itself autonomously toward the truck, as information on its location gathered by the Sentry was sent to the drone by the software.
The truck disappeared behind some hills, so the Sentry tower camera that was initially trained on it lost contact. But the surveillance drone had already identified it, so its location stayed visible on the screen. We watched as someone in the truck got out and launched a drone, which Lattice again labeled as a threat. It asked the operator if he’d like to send a second attack drone, which then piloted autonomously and locked onto the threatening drone. With one click, it could be instructed to fly into it fast enough to take it down. (We stopped short here, since Anduril isn’t allowed to actually take down drones at this test site.) The entire operation could have been managed by one person with a mouse and computer.
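What stands out in the demo is how little of the sequence a human actually performs: the software detects, classifies, and proposes; the operator just approves each step. A minimal Python sketch of that human-in-the-loop pattern (purely illustrative—the names and structure here are invented, not Anduril's code):

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A sensor detection: an ID, an AI-assigned label, and a threat flag."""
    track_id: str
    label: str        # e.g. "vehicle", "drone"
    is_threat: bool

def operator_approves(prompt: str) -> bool:
    """Human-in-the-loop gate: autonomy pauses for a click (here, a keypress)."""
    return input(f"{prompt} [y/n] ").strip().lower() == "y"

def handle_detection(track: Track, dispatch_drone) -> None:
    """Software flags the object; the operator decides; only then does a drone launch."""
    if track.is_threat and operator_approves(
        f"Track {track.track_id} ({track.label}) flagged as a possible threat. Send drone?"
    ):
        dispatch_drone(track)  # the drone then navigates itself using the shared track data

# Example: a sensor tower produces a track, and the operator is prompted once.
handle_detection(
    Track(track_id="T-42", label="vehicle", is_threat=True),
    dispatch_drone=lambda t: print(f"Ghost drone en route to {t.track_id}"),
)
```

The design choice the demo embodies is that every escalation—surveil, then intercept—is a single approval, which is what lets one person with a mouse run the whole engagement.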
Anduril is building on these capabilities by expanding Lattice Mesh, a software suite that allows other companies to tap into Anduril’s software and share data, the company announced today. More than 10 companies are now building their hardware into the system—everything from autonomous submarines to self-driving trucks—and Anduril has released a software development kit to help them do so. Military personnel operating hardware can then “publish” their own data to the network and “subscribe” to receive data feeds from other sensors in a secure environment. On December 3, the Pentagon’s Chief Digital and AI Office awarded a three-year contract to Anduril for Mesh.
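The story doesn't describe the SDK itself, but the "publish"/"subscribe" language points to a classic message-bus pattern: producers push data to named topics, and any authorized consumer listening on a topic receives it. A toy sketch of that pattern, with entirely hypothetical names (this is not the Lattice SDK):

```python
from collections import defaultdict
from typing import Any, Callable

class MeshBus:
    """Toy publish/subscribe bus illustrating the pattern described above."""
    def __init__(self) -> None:
        # topic name -> list of handler callbacks
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        """Register to receive every message published on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        """Deliver a message to every subscriber of the topic."""
        for handler in self._subscribers[topic]:
            handler(payload)

bus = MeshBus()
# A drone operator subscribes to tracks published by a (hypothetical) sensor tower.
bus.subscribe("tracks/sentry-01", lambda track: print("new track:", track))
bus.publish("tracks/sentry-01", {"id": "T-42", "label": "drone"})
```

The appeal of the pattern is that the tower and the drone never need to know about each other directly—each speaks only to the bus, which is how a third-party submarine or truck can join the same network by adopting the topic conventions.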
Anduril’s offering will also join forces with Maven, a program operated by the defense data giant Palantir that fuses information from different sources, like satellites and geolocation data. It’s the project that led Google employees in 2018 to protest against working in warfare. Anduril and Palantir announced on December 6 that the military will be able to use the Maven and Lattice systems together.
The military’s AI ambitions
The aim is to make Anduril’s software indispensable to decision-makers. It also represents a massive expansion of how the military is currently using AI. You might think the US Department of Defense, advanced as it is, would already have this level of hardware connectivity. We have some semblance of it in our daily lives, where phones, smart TVs, laptops, and other devices can talk to each other and share information. But for the most part, the Pentagon is behind.
“There’s so much information in this battle space, particularly with the growth of drones, cameras, and other types of remote sensors, where folks are just sopping up tons of information,” says Zak Kallenborn, a warfare analyst who works with the Center for Strategic and International Studies. Sorting through to find the most important information is a challenge. “There might be something in there, but there’s so much of it that we can’t just set a human down to deal with it,” he says.
Right now, humans also have to translate between systems made by different manufacturers. One soldier might have to manually rotate a camera to look around a base and see if there’s a drone threat, and then manually send information about that drone to another soldier operating the weapon to take it down. Those instructions might be shared via a low-tech messenger app—one on par with AOL Instant Messenger. That takes time. It’s a problem the Pentagon is attempting to solve through its Joint All-Domain Command and Control plan, among other initiatives.
“For a long time, we’ve known that our military systems don’t interoperate,” says Chris Brose, former staff director of the Senate Armed Services Committee and principal advisor to Senator John McCain, who now works as Anduril’s chief strategy officer. Much of his work has been convincing Congress and the Pentagon that a software problem is just as worthy of a slice of the defense budget as jets and aircraft carriers. (Anduril spent nearly $1.6 million on lobbying last year, according to data from Open Secrets, and has numerous ties with the incoming Trump administration: Anduril founder Palmer Luckey has been a longtime donor and supporter of Trump, and JD Vance spearheaded an investment in Anduril in 2017 when he worked at venture capital firm Revolution.)
Defense hardware also suffers from a connectivity problem. Tom Keane, a senior vice president in Anduril’s connected warfare division, walked me through a simple example from the civilian world. If you receive a text message while your phone is off, you’ll see the message when you turn the phone back on. It’s preserved. “But this functionality, which we don’t even think about,” Keane says, “doesn’t really exist” in the design of many defense hardware systems. Data and communications can be easily lost in challenging military networks. Anduril says its system instead stores data locally.
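What Keane describes is essentially store-and-forward messaging: persist outbound messages locally and drain the queue once the link returns, so nothing is lost while a node is offline. A minimal sketch of that idea (illustrative only, not Anduril's implementation):

```python
import json
import os
from collections import deque

class StoreAndForwardQueue:
    """Buffers outbound messages on local disk; drains them when the link returns."""
    def __init__(self, path: str = "outbox.jsonl") -> None:
        self.path = path
        self.pending: deque[dict] = deque()
        if os.path.exists(path):  # recover messages that survived a restart
            with open(path) as f:
                self.pending.extend(json.loads(line) for line in f)

    def send(self, msg: dict, link_up: bool, transmit) -> None:
        """Queue a message; if the link is up, flush everything in order."""
        self.pending.append(msg)
        self._persist()
        if link_up:
            while self.pending:
                transmit(self.pending.popleft())
            self._persist()  # clear the on-disk backlog once delivered

    def _persist(self) -> None:
        """Write the current backlog to disk so an outage or reboot loses nothing."""
        with open(self.path, "w") as f:
            f.writelines(json.dumps(m) + "\n" for m in self.pending)
```

It is the same guarantee a phone gives you for text messages, applied to sensor feeds on a contested network: delivery is deferred, not dropped.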
An AI data treasure trove
The push to build more AI-connected hardware systems in the military could spark one of the largest data collection projects the Pentagon has ever undertaken, and companies like Anduril and Palantir have big plans.
“Exabytes of defense data, indispensable for AI training and inferencing, are currently evaporating,” Anduril said on December 6, when it announced it would be working with Palantir to compile data collected in Lattice, including highly sensitive classified information, to train AI models. Training on a broader collection of data collected by all these sensors will also hugely boost the model-building efforts that Anduril is now doing in a partnership with OpenAI, announced on December 4. Earlier this year, Palantir also offered its AI tools to help the Pentagon reimagine how it categorizes and manages classified data. When Anduril founder Palmer Luckey told me in an interview in October that “it’s not like there’s some wealth of information on classified topics and understanding of weapons systems” to train AI models on, he may have been foreshadowing what Anduril is now building.
Even if some of this data from the military is already being collected, AI will suddenly make it much more useful. “What is new is that the Defense Department now has the capability to use the data in new ways,” Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, wrote in an email. “More data and ability to process it could support great accuracy and precision as well as faster information processing.”
The sum of these developments might be that AI models are brought more directly into military decision-making. That idea has brought scrutiny, as when Israel was found last year to have been using advanced AI models to process intelligence data and generate lists of targets. Human Rights Watch wrote in a report that the tools “rely on faulty data and inexact approximations.”
“I think we are already on a path to integrating AI, including generative AI, into the realm of decision-making,” says Probasco, who authored a recent analysis of one such case. She examined a system built within the military in 2023 called Maven Smart System, which allows users to “access sensor data from diverse sources [and] apply computer vision algorithms to help soldiers identify and choose military targets.”
Probasco said that building an AI system to control an entire decision pipeline, possibly without human intervention, “isn’t happening” and that “there are explicit US policies that would prevent it.”
A spokesperson for Anduril said that the purpose of Mesh is not to make decisions. “The Mesh itself is not prescribing actions or making recommendations for battlefield decisions,” the spokesperson said. “Instead, the Mesh is surfacing time-sensitive information”—information that operators will consider as they make those decisions.