Generative AI has taken off. Since the introduction of ChatGPT in November 2022, businesses have flocked to large language models (LLMs) and generative AI models looking for solutions to their most complex and labor-intensive problems. The promise that customer service could be turned over to highly trained chat platforms that could recognize a customer’s problem and present user-friendly technical feedback, for example, or that companies could break down and analyze their troves of unstructured data, from videos to PDFs, has fueled massive enterprise interest in the technology.
This hype is moving into production. The share of businesses that use generative AI in at least one business function nearly doubled this year to 65%, according to McKinsey. The vast majority of organizations (91%) expect generative AI applications to increase their productivity, with IT, cybersecurity, marketing, customer service, and product development among the most impacted areas, according to Deloitte.
Yet the difficulty of deploying generative AI successfully continues to hamper progress. Companies know that generative AI could transform their businesses—and that failing to adopt will leave them behind—but they face hurdles during implementation. This leaves two-thirds of business leaders dissatisfied with progress on their AI deployments. And while 79% of companies said in Q3 2023 that they planned to deploy generative AI projects within the next year, only 5% reported having use cases in production by May 2024.
“We’re just at the beginning of figuring out how to productize AI deployment and make it cost effective,” says Rowan Trollope, CEO of Redis, a maker of real-time data platforms and AI accelerators. “The cost and complexity of implementing these systems is not straightforward.”
Estimates of the eventual GDP impact of generative AI range from just under $1 trillion to a staggering $4.4 trillion annually, with projected productivity impacts comparable to those of the Internet, robotic automation, and the steam engine. Yet, while the promise of accelerated revenue growth and cost reductions remains, the path to get to these goals is complex and often costly. Companies need to find ways to efficiently build and deploy AI projects with well-understood components at scale, says Trollope.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What the departing White House chief tech advisor has to say on AI
President Biden’s administration will end within two months, and likely to depart with him is Arati Prabhakar, the top mind for science and technology in his cabinet. She has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office.
Prabhakar was instrumental in passing the president’s executive order on AI in 2023, which sets guidelines for tech companies to make AI safer and more transparent (though it relies on voluntary participation).
As she prepares for the end of the administration, MIT Technology Review sat down with Prabhakar and asked her to reflect on President Biden’s AI accomplishments, and how the approach to AI risks, immigration policies, the CHIPS Act and more could change under Trump. Read the full story.
—James O’Donnell
This manga publisher is using Anthropic’s AI to translate Japanese comics into English
A Japanese publishing startup is using Anthropic’s flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the two to three months it would take a team of humans.
But not everyone is happy about it. The firm has angered a number of manga fans who see the use of AI to translate a celebrated and traditional art form as one more front in the ongoing battle between tech companies and artists. Read the full story.
—Will Douglas Heaven
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The US has announced more restrictions on chip exports to China
It’s the third round of crackdowns on the industry in as many years. (Reuters)
+ It’s not just China-based companies that could suffer, either. (WP $)
+ The delayed announcement gave China the chance to stockpile affected chips. (WSJ $)
+ Meanwhile, computer scientists in the West are trying to make peace. (Economist $)
+ What’s next in chips. (MIT Technology Review)
2 Donald Trump’s administration is full of pseudo-influencers
They’re capitalizing on their fame to make big bucks ahead of the inauguration. (WP $)
+ A lot of his cabinet also happen to be billionaires. (NY Mag $)
3 We’re not prepared for a clean energy future
It seems energy authorities keep underestimating how much clean power the world really wants. (Vox)
+ Why artificial intelligence and clean energy need each other. (MIT Technology Review)
4 Ads could start cropping up in ChatGPT
OpenAI is on a revenue drive, and advertising is an obvious cash source. (FT $)
+ Elon Musk is doing all he can to prevent it becoming a for-profit business. (Bloomberg $)
5 Chemistry students in Mexico are being lured into making fentanyl
Cartels are offering young chemists large sums to make the drug even more potent. (NYT $)
+ Deaths from fentanyl are falling—and it looks like it’s because of supply changes. (FT $)
+ Anti-opioid groups are cautiously optimistic about Trump’s new tariffs. (The Guardian)
6 BYD isn’t just an EV company these days
It’s carved out an unlikely side gig assembling Apple’s iPads. (WSJ $)
+ BYD has also experimented with shipping its colossal car consignments itself. (MIT Technology Review)
7 Our organs age at different rates
And AI is giving us a window into understanding why. (New Scientist $)
+ Aging hits us in our 40s and 60s. But well-being doesn’t have to fall off a cliff. (MIT Technology Review)
8 The unbearable mundanity of home DNA tests
The likelihood of them revealing anything interesting is actually pretty low. (The Guardian)
+ How to… delete your 23andMe data. (MIT Technology Review)
9 This website is full of random, barely watched home videos
Which one you’ll be served is anyone’s guess. (WP $)
10 Brain rot is Oxford University Press’s word of the year
Specifically in the context of spending too long looking at nonsense online. (BBC)
Quote of the day
“It’s like trying to prevent a fisherman from catching bigger fish simply by denying him bigger fishing poles. He’ll get there in the end.”
—Meghan Harris, an export control expert at consultancy Beacon Global Strategies, explains to the Financial Times the limits of the US government’s plans to curb China’s chipmaking.
The big story
The quest to build wildfire-resistant homes
April 2023
With each devastating wildfire in the US West, officials consider new methods or regulations that might save homes or lives the next time.
In the parts of California where the hillsides meet human development, and where the state has suffered recurring seasonal fire tragedies, that search for new means of survival has especially high stakes.
Many of these methods are low cost and low tech, but no less innovative. In fact, the hardest part to tackle may not be materials engineering, but social change. Read the full story.
—Susie Cagle
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ This Instagram account is a treasure trove of bygone mobile phones.
+ The newly renovated Notre Dame cathedral is really quite something.
+ Bad news: we’re probably not going to find alien life any time soon.
+ Think you know grilled cheese? This recipe might make you question everything you know and hold dear.
This manga publisher is using Anthropic’s AI to translate Japanese comics into English

A Japanese publishing startup is using Anthropic’s flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the two to three months it would take a team of humans.
Orange was founded by Shoko Ugaki, a manga superfan who (according to VP of product Rei Kuroda) has some 10,000 titles in his house. The company now wants more people outside Japan to have access to them. “I hope we can do a great job for our readers,” says Kuroda.
But not everyone is happy. The firm has angered a number of manga fans who see the use of AI to translate a celebrated and traditional art form as one more front in the ongoing battle between tech companies and artists. “However well-intentioned this company might be, I find the idea of using AI to translate manga distasteful and insulting,” says Casey Brienza, a sociologist and author of the book Manga in America: Transnational Book Publishing and the Domestication of Japanese Comics.
Manga is a form of Japanese comic that has been around for more than a century. Hit titles are often translated into other languages and find a large global readership, especially in the US. Some, like Battle Angel Alita or One Piece, are turned into anime (animated versions of the comics) or live-action shows and become blockbuster movies and top Netflix picks. The US manga market was worth around $880 million in 2023 but is expected to reach $3.71 billion by 2030, according to some estimates. “It’s a huge growth market right now,” says Kuroda.
Orange wants a part of that international market. Only around 2% of titles published in Japan make it to the US, says Kuroda. As Orange sees it, the problem is that manga takes human translators too long to translate. By building AI tools to automate most of the tasks involved in translation—including extracting Japanese text from a comic’s panels, translating it into English, generating a new font, pasting the English back into the comic, and checking for mistranslations and typos—it can publish a translated manga title in around one-tenth the time it takes human translators and illustrators working by hand, the company says.
Humans still keep a close eye on the process, says Kuroda: “Honestly, AI makes mistakes. It sometimes misunderstands Japanese. It makes mistakes with artwork. We think humans plus AI is what’s important.”
Superheroes, aliens, cats
Manga is a complex art form. Stories are told via a mix of pictures and words, which can be descriptions or characters’ voices or sound effects, sometimes in speech bubbles and sometimes scrawled across the page. Single sentences can be split across multiple panels.
There are also diverse themes and narratives, says Kuroda: “There’s the student romance, mangas about gangs and murders, superheroes, aliens, cats.” Translations must capture the cultural nuance in each story. “This complexity makes localization work highly challenging,” he says.
Orange often starts with nothing more than the scanned image of a page. Its system first identifies which parts of the page show Japanese text, copies it, and erases the text from each panel. These snippets of text are then combined into whole sentences and passed to the translation module, which not only translates the text into English but keeps track of where on the page each individual snippet comes from. Because Japanese and English have a very different word order, the snippets need to be reordered, and the new English text must be placed on the page in different places from where the Japanese equivalent had come from—all without messing up the sequence of images.
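Orange has not published its code, but the steps described above suggest a pipeline along the following lines. This Python sketch is purely illustrative: every function in it is a hypothetical stub standing in for a real detection, OCR, inpainting, or typesetting component, and none of the names come from Orange.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    panel: int        # which panel the text came from
    bbox: tuple       # (x, y, w, h) position on the page
    japanese: str
    english: str = ""

def detect_and_extract(page_image):
    """Hypothetical stub: locate the Japanese text regions and OCR them."""
    return [Snippet(panel=0, bbox=(10, 10, 120, 40), japanese="こんにちは")]

def erase_text(page_image, snippets):
    """Hypothetical stub: inpaint over the original lettering."""
    return page_image

def translate(text):
    """Hypothetical stub: in practice, an LLM call."""
    return "Hello"

def typeset(page_image, snippets):
    """Hypothetical stub: place the English back onto the page. English runs
    longer and reads in a different order, so the original bounding boxes are
    a starting point, not a final layout."""
    return page_image

def translate_page(page_image):
    snippets = detect_and_extract(page_image)  # 1. find and copy the text
    clean = erase_text(page_image, snippets)   # 2. erase it from each panel
    # 3. Translate. A real system would first group snippets into whole
    #    sentences, since one sentence can span several panels, while keeping
    #    track of where on the page each fragment belongs.
    for s in snippets:
        s.english = translate(s.japanese)
    return typeset(clean, snippets)            # 4. re-letter the clean page
```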
“Generally, the images are the most important part of the story,” says Frederik Schodt, an award-winning manga translator who published his first translation in 1977. “Any language cannot contradict the images, so you can’t take many of the liberties that you might in translating a novel. You can’t rearrange paragraphs or change things around much.”
Orange tried several large language models, including its own, developed in house, before picking Claude 3.5. “We’re always evaluating new models,” says Kuroda. “Right now Claude gives us the most natural tone.”
Claude also has an agent framework that lets several sub-models work together on an overall task. Orange uses this framework to juggle the multiple steps in the translation process.
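The article doesn’t specify how Orange wires those steps together, but as a rough illustration of chaining Claude calls for successive steps, here is a minimal sketch using Anthropic’s Python SDK. The model ID, the prompts, and the translate-then-check split are illustrative assumptions, not Orange’s actual setup.

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-20240620"  # illustrative model ID

def ask(prompt: str) -> str:
    """A single Claude call; each pipeline step gets its own call."""
    msg = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def translate_line(japanese: str) -> str:
    # Step 1: draft a natural-sounding English translation.
    draft = ask(f"Translate this manga dialogue into natural English:\n{japanese}")
    # Step 2: a second pass checks the draft against the source, standing in
    # for the kind of mistranslation check the article describes.
    return ask(
        "Fix any mistranslations in this English rendering of the Japanese "
        "line below, and return only the corrected English.\n"
        f"Japanese: {japanese}\nEnglish: {draft}"
    )

if __name__ == "__main__":
    print(translate_line("猫になっちゃった…"))
```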
Orange distributes its translations via an app called Emaqi (a pun on “emaki,” the ancient Japanese illustrated scrolls that are considered a precursor to manga). It also wants to be a translator-for-hire for US publishers.
But Orange has not been welcomed by all US fans. When it showed up at Anime NYC, a US anime convention, this summer, the Japanese-to-English translator Jan Mitsuko Cash tweeted: “A company like Orange has no place at the convention hosting the Manga Awards, which celebrates manga and manga professionals in the industry. If you agree, please encourage @animenyc to ban AI companies from exhibiting or hosting panels.”
Brienza takes the same view. “Work in the culture industries, including translation, which ultimately is about translating human intention, not mere words on a page, can be poorly paid and precarious,” she says. “If this is the way the wind is blowing, I can only grieve for those who will go from making little money to none.”
Some have also called Orange out for cutting corners. “The manga uses stylized text to represent the inner thoughts that the [protagonist] can’t quite voice,” another fan tweeted. “But Orange didn’t pay a redrawer or letterer to replicate it properly. They also just skip over some text entirely.”
Everyone at Orange understands that manga translation is a sensitive issue, says Kuroda: “We believe that human creativity is absolutely irreplaceable, which is why all AI-assisted work is rigorously reviewed, refined, and finalized by a team of people.”
Orange also claims that the authors it has translated are on board with its approach. “I’m genuinely happy with how the English version turned out,” says Kenji Yajima, one of the authors Orange has worked with, referring to the company’s translation of his title Neko Oji: Salaryman reincarnated as a kitten! (see images). “As a manga artist, seeing my work shared in other languages is always exciting. It’s a chance to connect with readers I never imagined reaching before.”
Schodt sees the upside too. He notes that the US is flooded with poor-quality, unofficial fan-made translations. “The number of pirated translations is huge,” he says. “It’s like a parallel universe.”
He thinks using AI to streamline translation is inevitable. “It’s the dream of many companies right now,” he says. “But it will take a huge investment.” He believes that really good translation will require large language models trained specifically on manga: “It’s not something that one small company is going to be able to pull off.”
“Whether this will prove economically feasible right now is anyone’s guess,” says Schodt. “There is a lot of advertising hype going on, but the readers will have the final judgment.”
What the departing White House chief tech advisor has to say on AI

President Biden’s administration will end within two months, and likely to depart with him is Arati Prabhakar, the top mind for science and technology in his cabinet. She has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office. Prabhakar was instrumental in passing the president’s executive order on AI in 2023, which sets guidelines for tech companies to make AI safer and more transparent (though it relies on voluntary participation).
The incoming Trump administration has not presented a clear thesis of how it will handle AI, but plenty of people in it will want to see that executive order nullified. Trump said as much in July, endorsing the 2024 Republican Party Platform that says the executive order “hinders AI innovation and imposes Radical Leftwing ideas on the development of this technology.” Venture capitalist Marc Andreessen has said he would support such a move.
However, complicating that narrative will be Elon Musk, who for years has expressed fears about doomsday AI scenarios, and has been supportive of some regulations aiming to promote AI safety.
As she prepares for the end of the administration, I sat down with Prabhakar and asked her to reflect on President Biden’s AI accomplishments, and how AI risks, immigration policies, the CHIPS Act and more could change under Trump.
This conversation has been edited for length and clarity.
Every time a new AI model comes out, there are concerns about how it could be misused. As you think back to what were hypothetical safety concerns just two years ago, which ones have come true?
We identified a whole host of risks when large language models burst on the scene, and the one that has fully manifested in horrific ways is deepfakes and image-based sexual abuse. We’ve worked with our colleagues at the Gender Policy Council to urge industry to step up and take some immediate actions, which some of them are doing. There are a whole host of things that can be done—payment processors could actually make sure people are adhering to their Terms of Use. They don’t want to be supporting [image-based sexual abuse] and they can actually take more steps to make sure that they’re not. There’s legislation pending, but that’s still going to take some time.
Have there been risks that didn’t pan out to be as concerning as you predicted?
At first there was a lot of concern expressed by the AI developers about biological weapons. When people did the serious benchmarking about how much riskier that was compared with someone just doing Google searches, it turns out, there’s a marginally worse risk, but it is marginal. If you haven’t been thinking about how bad actors can do bad things, then the chatbots look incredibly alarming. But you really have to say, compared to what?
For many people, there’s a knee-jerk skepticism about the Department of Defense or police agencies going all in on AI. I’m curious what steps you think those agencies need to take to build trust.
If consumers don’t have confidence that the AI tools they’re interacting with are respecting their privacy, are not embedding bias and discrimination, that they’re not causing safety problems, then all the marvelous possibilities really aren’t going to materialize. Nowhere is that more true than national security and law enforcement.
I’ll give you a great example. Facial recognition technology is an area where there have been horrific, inappropriate uses: take a grainy video from a convenience store and identify a black man who has never even been in that state, who’s then arrested for a crime he didn’t commit. (Editor’s note: Prabhakar is referring to this story). Wrongful arrests based on a really poor use of facial recognition technology, that has got to stop.
In stark contrast to that, when I go through security at the airport now, it takes your picture and compares it to your ID to make sure that you are the person you say you are. That’s a very narrow, specific application that’s matching my image to my ID, and the sign tells me—and I know from our DHS colleagues that this is really the case—that they’re going to delete the image. That’s an efficient, responsible use of that kind of automated technology. Appropriate, respectful, responsible—that’s where we’ve got to go.
Were you surprised at the AI safety bill getting vetoed in California?
I wasn’t. I followed the debate, and I knew that there were strong views on both sides. I think what was expressed, that I think was accurate, by the opponents of that bill, is that it was simply impractical, because it was an expression of desire about how to assess safety, but we actually just don’t know how to do those things. No one knows. It’s not a secret, it’s a mystery.
To me, it really reminds us that while all we want is to know how safe, effective and trustworthy a model is, we actually have very limited capacity to answer those questions. Those are actually very deep research questions, and a great example of the kind of public R&D that now needs to be done at a much deeper level.
Let’s talk about talent. Much of the recent National Security Memorandum on AI was about how to help the right talent come from abroad to the US to work on AI. Do you think we’re handling that in the right way?
It’s a hugely important issue. This is the ultimate American story, that people have come here throughout the centuries to build this country, and it’s as true now in science and technology fields as it’s ever been. We’re living in a different world. I came here as a small child because my parents came here in the early 1960s from India, and in that period, there were very limited opportunities [to emigrate to] many other parts of the world.
One of the good pieces of news is that there is much more opportunity now. The other piece of news is that we do have a very critical strategic competition with the People’s Republic of China, and that makes it more complicated to figure out how to continue to have an open door for people who come seeking America’s advantages, while making sure that we continue to protect critical assets like our intellectual property.
Do you think the divisive debates around immigration, especially around the time of the election, may hurt the US ability to bring the right talent into the country?
Because we’ve been stalled as a country on immigration for so long, what is caught up in that is our ability to deal with immigration for the STEM fields. It’s collateral damage.
Has the CHIPS Act been successful?
I’m a semiconductor person starting back with my graduate work. I was astonished and delighted when, after four decades, we actually decided to do something about the fact that semiconductor manufacturing capability got very dangerously concentrated in just one part of the world [Taiwan]. So it was critically important that, with the President’s leadership, we finally took action. And the work that the Commerce Department has done to get those manufacturing incentives out, I think they’ve done a terrific job.
One of the main beneficiaries so far of the CHIPS Act has been Intel. There are varying degrees of confidence in whether it will deliver on building a domestic chip supply chain in the way the CHIPS Act intended. Is it risky to put a lot of eggs in the basket of one chipmaker?
I think the most important thing I see in terms of the industry with the CHIPS Act is that today we’ve got not just Intel, but TSMC, Samsung, SK Hynix and Micron. These are the five companies whose products and processes are at the most advanced nodes in semiconductor technology. They are all now building in the US. There’s no other part of the world that’s going to have all five of those. An industry is bigger than a company. I think when you look at the aggregate, that’s a signal to me that we’re on a very different track.
You are the President’s chief advisor for science and technology. I want to ask about the cultural authority that science has, or doesn’t have, today. RFK Jr. is the pick for health secretary, and in some ways, he captures a lot of frustration that Americans have about our healthcare system. In other ways, he has many views that can only be described as anti-science. How do you reflect on the authority that science has now?
I think it’s important to recognize that we live in a time when trust in institutions has declined across the board, though trust in science remains relatively high compared with what’s happened in other areas. But it’s very much part of this broader phenomenon, and I think that the scientific community has some roles [to play] here. The fact of the matter is that despite America having the best biomedical research that the world has ever seen, we don’t have robust health outcomes. Three dozen countries have longer life expectancies than America. That’s not okay, and that disconnect between advancing science and changing people’s lives is just not sustainable. The pact that science and technology and R&D makes with the American people is that if we make these public investments, it’s going to improve people’s lives and when that’s not happening, it does erode trust.
Is it fair to say that that gap—between the expertise we have in the US and our poor health outcomes—explains some of the rise in conspiratorial thinking, in the disbelief of science?
It leaves room for that. Then there’s a quite problematic rejection of facts. It’s troubling if you’re a researcher, because you just know that what’s being said is not true. The thing that really bothers me is [that the rejection of facts] changes people’s lives, and it’s extremely dangerous and harmful. Think about if we lost herd immunity for some of the diseases for which we right now have fairly high levels of vaccination. It was an ugly world before we tamed infectious disease with the vaccines that we have.