Update April 11, 1:46 am: This article has been updated to include more information and background on the resolution.
US President Donald Trump has signed a joint congressional resolution overturning a Biden administration-era rule that would have required decentralized finance (DeFi) protocols to report transactions to the Internal Revenue Service.
Set to take effect in 2027, the so-called IRS DeFi broker rule would have expanded the tax authority’s existing reporting requirements to include DeFi platforms, requiring them to disclose gross proceeds from crypto sales, including information regarding taxpayers involved in the transactions.
Trump formally killed the measure by signing the resolution on April 10, marking the first time a crypto bill has ever been signed into law, Representative Mike Carey, who backed the bill, said in a statement.
“The DeFi Broker Rule needlessly hindered American innovation, infringed on the privacy of everyday Americans, and was set to overwhelm the IRS with an overflow of new filings that it doesn’t have the infrastructure to handle during tax season,” he said.
Critics of the rule argued it would saddle decentralized platforms with overly onerous requirements, hampering innovation in crypto and DeFi.
Supporters, such as Democratic Representative Lloyd Doggett, said in March that killing the IRS rule would create a loophole that wealthy tax cheats would exploit.
The Senate passed the resolution in a final vote on March 26. It had previously passed its own version in early March, but the House drafted a separate measure because constitutional rules require budget-related legislation to originate in the lower chamber.
Trump was widely expected to sign the bill, as White House AI and crypto czar David Sacks said in March that the president supported killing the measure.
Industry “can breathe again” with IRS rule repealed
Kristin Smith, CEO of crypto advocacy group the Blockchain Association, said in an April 10 statement that the “industry’s innovators, builders, and developers can breathe again” now that the resolution has passed.
“This rule promised an end to the United States crypto industry; it was a sledgehammer to the engine of American innovation,” she added.
Kristin Smith claimed the IRS rule would have destroyed the US crypto industry. Source: Blockchain Association
The lobby group filed a lawsuit in December against the IRS, the Treasury, and then-Treasury Secretary Janet Yellen to repeal the IRS rule, claiming it was unlawful and an “unconstitutional overreach.”
The Trump administration has taken a friendly attitude toward crypto and has brought the Securities and Exchange Commission to heel, and the agency has wound back the hardline stance toward crypto that it forged under former Chair Gary Gensler.
The regulator has dismissed a number of enforcement actions and probes against crypto firms that it launched under the Biden administration and has begun a series of industry consultations on how it should regulate crypto.
The International Energy Agency states in a new report that AI could eventually reduce greenhouse-gas emissions, possibly by much more than the boom in energy-guzzling data center development pushes them up.
The finding echoes a point that prominent figures in the AI sector have made as well to justify, at least implicitly, the gigawatts’ worth of electricity demand that new data centers are placing on regional grid systems across the world. Notably, in an essay last year, OpenAI CEO Sam Altman wrote that AI will deliver “astounding triumphs,” such as “fixing the climate,” while offering the world “nearly-limitless intelligence and abundant energy.”
There are reasonable arguments to suggest that AI tools may eventually help reduce emissions, as the IEA report underscores. But what we know for sure is that they’re driving up energy demand and emissions today—especially in the regional pockets where data centers are clustering.
So far, these facilities, which generally run around the clock, are substantially powered through natural-gas turbines, which produce significant levels of planet-warming emissions. Electricity demands are rising so fast that developers are proposing to build new gas plants and convert retired coal plants to supply the buzzy industry.
The other thing we know is that there are better, cleaner ways of powering these facilities already, including geothermal plants, nuclear reactors, hydroelectric power, and wind or solar projects coupled with significant amounts of battery storage. The trade-off is that these facilities may cost more to build or operate, or take longer to get up and running.
There’s something familiar about the suggestion that it’s okay to build data centers that run on fossil fuels today because AI tools will help the world drive down emissions eventually. It recalls the purported promise of carbon credits: that it’s fine for a company to carry on polluting at its headquarters or plants, so long as it’s also funding, say, the planting of trees that will suck up a commensurate level of carbon dioxide.
Unfortunately, we’ve seen again and again that such programs often overstate any climate benefits, doing little to alter the balance of what’s going into or coming out of the atmosphere.
But in the case of what we might call “AI offsets,” the potential to overstate the gains may be greater, because the promised benefits wouldn’t meaningfully accrue for years or decades. Plus, there’s no market or regulatory mechanism to hold the industry accountable if it ends up building huge data centers that drive up emissions but never delivers on these climate claims.
The IEA report outlines instances where industries are already using AI in ways that could help drive down emissions, including detecting methane leaks in oil and gas infrastructure, making power plants and manufacturing facilities more efficient, and reducing energy consumption in buildings.
AI has also shown early promise in materials discovery, helping to speed up the development of novel battery electrolytes. Some hope the technology could deliver advances in solar materials, nuclear power, or other clean energy technologies and improve climate science, extreme weather forecasting, and disaster response, as other studies have noted.
Even without any “breakthrough discoveries,” the IEA estimates, widespread adoption of AI applications could cut emissions by 1.4 billion tons in 2035. Those reductions, “if realized,” would be as much as triple the emissions from data centers by that time, under the IEA’s most optimistic development scenario.
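Read literally, that ratio pins down the scale involved. As a back-of-the-envelope inference from the figures above (not a number stated in the report), the IEA's most optimistic scenario implies data-center emissions in 2035 of roughly:

```latex
% If 1.4 Gt of cuts is "as much as triple" data-center emissions
% in 2035, those emissions come out to roughly
\[
E_{\text{data centers}} \approx \frac{1.4\ \text{Gt CO}_2}{3} \approx 0.47\ \text{Gt CO}_2
\]
```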
But that’s a very big “if.” It requires placing a lot of faith in technical advances, wide-scale deployments, and payoffs from changes in practices over the next 10 years. And there’s a big gap between how AI could be used and how it will be used, a difference that will depend a lot on economic and regulatory incentives.
Under the Trump administration, there’s little reason to believe that US companies, at least, will face much government pressure to use these tools specifically to drive down emissions. Absent the necessary policy carrots or sticks, it’s arguably more likely that the oil and gas industry will deploy AI to discover new fossil-fuel deposits than to pinpoint methane leaks.
To be clear, the IEA’s figures are a scenario, not a prediction. The authors readily acknowledged that there’s huge uncertainty on this issue, stating: “It is vital to note that there is currently no momentum that could ensure the widespread adoption of these AI applications. Therefore, their aggregate impact, even in 2035, could be marginal if the necessary enabling conditions are not created.”
In other words, we certainly can’t count on AI to drive down emissions more than it drives them up, especially within the time frame now demanded by the dangers of climate change.
As a reminder, it’s already 2025. Rising emissions have now pushed the planet perilously close to fully tipping past 1.5 °C of warming, the risks from heatwaves, droughts, sea-level rise, and wildfires are climbing—and global climate pollution is still going up.
We are barreling toward midcentury, just 25 years shy of when climate models show that every industry in every nation needs to get pretty close to net-zero emissions to prevent warming from surging past 2 °C over preindustrial levels. And yet any new natural-gas plants built today, for data centers or any other purpose, could easily still be running 40 years from now.
Carbon dioxide stays in the atmosphere for hundreds of years. So even if the AI industry does eventually provide ways of cutting more emissions than it produces in a given year, those future reductions won’t cancel out the emissions the sector will pump out along the way—or the warming they produce.
It’s a trade-off we don’t need to make if AI companies, utilities, and regional regulators make wiser choices about how to power the data centers they’re building and running today.
Such efforts need to become the rule rather than the exception. We no longer have the time or carbon budget to keep cranking up emissions on the promise that we’ll take care of them later.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
How AI can help supercharge creativity
Existing generative tools can automate a striking range of creative tasks and offer near-instant gratification—but at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop.
And so they are looking for ways to inject human creativity back into the process: working on what’s known as co-creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves.
The aim is to develop AI tools that augment our creativity rather than strip it from us—pushing us to be better at composing music, developing games, designing toys, and much more—and lay the groundwork for a future in which humans and machines create things together.
Ultimately, generative models could offer artists and designers a whole new medium, pushing them to make things that couldn’t have been made before, and give everyone creative superpowers. Read the full story.
—Will Douglas Heaven
This story is from the next edition of our print magazine, which is all about creativity. Subscribe now to read it and get a copy of the magazine when it lands!
Tariffs are bad news for batteries
Since Donald Trump announced his plans for sweeping tariffs last week, the vibes have been, in a word, chaotic. Markets have seen one of the quickest drops in the last century, and it’s widely anticipated that the global economic order may be forever changed.
These tariffs could be particularly rough on the battery industry. China dominates the entire supply chain and is subject to monster tariff rates, and even US battery makers won’t escape the effects. Read the full story.
—Casey Crownhart
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump has announced a 90-day tariff pause for some countries
He’s decided that all the countries that didn’t retaliate against the severe tariffs would receive a reprieve. (The Guardian)
+ China, however, is now subject to a whopping 125% tariff. (CNBC)
+ Chinese sellers on Amazon are preparing to hike their prices in response. (Reuters)
+ Trump’s advisors have claimed the pivot was always part of the plan. (Vox)

2 DOGE has fired driverless car safety assessors
Many of whom were in charge of regulating Tesla, among other companies. (FT $)
+ The department is being audited by the Government Accountability Office. (Wired $)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

3 The cost of a US-made iPhone could rise by 90%
Bank of America has crunched the numbers. (Bloomberg $)
+ Even so, an American-made iPhone could be of inferior quality. (WSJ $)
+ Apple has chartered flights to ship 600 tons of iPhones from India. (Reuters)

4 The EU wants to build its own AI gigafactories
In a bid to catch up with the US and China. (WSJ $)

5 Amazon was forced to cancel its satellite internet launch
The rocket carrying its first batch of internet satellites was unable to take off due to bad weather. (NYT $)

6 America’s air quality is likely to get worse
The Trump administration is rolling back the environmental rules that helped lower air pollution. (The Atlantic $)
+ The world’s next big environmental problem could come from space. (MIT Technology Review)

7 Spammers exploited OpenAI’s tech to blast customized spam
The unwanted messages were distributed over four months. (Ars Technica)

8 Chinese social media is filled with memes mocking Trump’s tariffs
Featuring finance bros and JD Vance unhappily laboring in factories. (Insider $)

9 Do you have a Fortnite accent?
Players of the popular game tend to speak in a highly specific way. (Wired $)

10 An em dash is not a giveaway that something has been written by AI
Humans use it too—and love it. (WP $)
+ Not all AI-generated writing is bad. (New Yorker $)
+ AI-text detection tools are really easy to fool. (MIT Technology Review)
Quote of the day
“Entering a group chat is like leaving your front door unlocked and letting strangers wander in.”
—Author LM Chilton, speaking to Wired about the inherent dangers of trusting that what you say in a group chat stays in the group chat.
The big story
Digital twins of human organs are here. They’re set to transform medical treatment.
Steven Niederer, a biomedical engineer at the Alan Turing Institute and Imperial College London, has a cardboard box filled with 3D-printed hearts. Each of them is modeled on the real heart of a person with heart failure, but Niederer is more interested in creating detailed replicas of people’s hearts using computers.
These “digital twins” are the same size and shape as the real thing. They work in the same way. But they exist only virtually. Scientists can do virtual surgery on these virtual hearts, figuring out the best course of action for a patient’s condition.
After decades of research, models like these are now entering clinical trials and starting to be used for patient care. The eventual goal is to create digital versions of our bodies—computer copies that could help researchers and doctors figure out our risk of developing various diseases and determine which treatments might work best.
How AI can help supercharge creativity

Sometimes Lizzie Wilson shows up to a rave with her AI sidekick.
One weeknight this past February, Wilson plugged her laptop into a projector that threw her screen onto the wall of a low-ceilinged loft space in East London. A small crowd shuffled in the glow of dim pink lights. Wilson sat down and started programming.
Techno clicks and whirs thumped from the venue’s speakers. The audience watched, heads nodding, as Wilson tapped out code line by line on the projected screen—tweaking sounds, looping beats, pulling a face when she messed up.
“It’s kind of boring when you go to watch a show and someone’s just sitting there on their laptop,” she says. “You can enjoy the music, but there’s a performative aspect that’s missing. With live coding, everyone can see what it is that I’m typing. And when I’ve had my laptop crash, people really like that. They start cheering.”
Taking risks is part of the vibe. And so Wilson likes to dial up her performances one more notch by riffing off what she calls a live-coding agent, a generative AI model that comes up with its own beats and loops to add to the mix. Often the model suggests sound combinations that Wilson hadn’t thought of. “You get these elements of surprise,” she says. “You just have to go for it.”
Wilson, a researcher at the Creative Computing Institute at the University of the Arts London, is just one of many working on what’s known as co-creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves. She and her colleagues built the live-coding agent to explore how artificial intelligence can be used to support human artistic endeavors—in Wilson’s case, musical improvisation.
It’s a vision that goes beyond the promise of existing generative tools put out by companies like OpenAI and Google DeepMind. Those can automate a striking range of creative tasks and offer near-instant gratification—but at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop.
And so they are looking for ways to inject human creativity back into the process. The aim is to develop AI tools that augment our creativity rather than strip it from us—pushing us to be better at composing music, developing games, designing toys, and much more—and lay the groundwork for a future in which humans and machines create things together.
Ultimately, generative models could offer artists and designers a whole new medium, pushing them to make things that couldn’t have been made before, and give everyone creative superpowers.
Explosion of creativity
There’s no one way to be creative, but we all do it. We make everything from memes to masterpieces, infant doodles to industrial designs. There’s a mistaken belief, typically among adults, that creativity is something you grow out of. But being creative—whether cooking, singing in the shower, or putting together super-weird TikToks—is still something that most of us do just for the fun of it. It doesn’t have to be high art or a world-changing idea (and yet it can be). Creativity is basic human behavior; it should be celebrated and encouraged.
When generative text-to-image models like Midjourney, OpenAI’s DALL-E, and the popular open-source Stable Diffusion arrived, they sparked an explosion of what looked a lot like creativity. Millions of people were now able to create remarkable images of pretty much anything, in any style, with the click of a button. Text-to-video models came next. Now startups like Udio are developing similar tools for music. Never before have the fruits of creation been within reach of so many.
But for a number of researchers and artists, the hype around these tools has warped the idea of what creativity really is. “If I ask the AI to create something for me, that’s not me being creative,” says Jeba Rezwana, who works on co-creativity at Towson University in Maryland. “It’s a one-shot interaction: You click on it and it generates something and that’s it. You cannot say ‘I like this part, but maybe change something here.’ You cannot have a back-and-forth dialogue.”
Rezwana is referring to the way most generative models are set up. You can give the tools feedback and ask them to have another go. But each new result is generated from scratch, which can make it hard to nail exactly what you want. As the filmmaker Walter Woodman put it last year after his art collective Shy Kids made a short film with OpenAI’s text-to-video model for the first time: “Sora is a slot machine as to what you get back.”
What’s more, the latest versions of some of these generative tools do not even use your submitted prompt as is to produce an image or video (at least not on their default settings). Before a prompt is sent to the model, the software edits it—often by adding dozens of hidden words—to make it more likely that the generated image will appear polished.
“Extra things get added to juice the output,” says Mike Cook, a computational creativity researcher at King’s College London. “Try asking Midjourney to give you a bad drawing of something—it can’t do it.” These tools do not give you what you want; they give you what their designers think you want.
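To make Cook’s point concrete, here is a minimal Python sketch of the kind of hidden rewriting he describes. The modifier list and function are invented for illustration; real products keep their actual rewriting rules private:

```python
# Hypothetical sketch of hidden prompt augmentation, as described above.
# The modifiers are invented for illustration; real tools do not publish
# their rewriting rules.

HIDDEN_MODIFIERS = [
    "highly detailed", "professional lighting", "sharp focus",
    "trending digital art", "vivid colors",
]

def augment_prompt(user_prompt: str) -> str:
    """Append hidden style words so outputs look polished by default."""
    return user_prompt + ", " + ", ".join(HIDDEN_MODIFIERS)

final_prompt = augment_prompt("a bad drawing of a cat")
# The model never sees the plain request -- which is why asking for a
# deliberately "bad" drawing tends not to work.
print(final_prompt)
```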
All of which is fine if you just need a quick image and don’t care too much about the details, says Nick Bryan-Kinns, also at the Creative Computing Institute: “Maybe you want to make a Christmas card for your family or a flyer for your community cake sale. These tools are great for that.”
In short, existing generative models have made it easy to create, but they have not made it easy to be creative. And there’s a big difference between the two. For Cook, relying on such tools could in fact harm people’s creative development in the long run. “Although many of these creative AI systems are promoted as making creativity more accessible,” he wrote in a paper published last year, they might instead have “adverse effects on their users in terms of restricting their ability to innovate, ideate, and create.” Given how much generative models have been championed for putting creative abilities at everyone’s fingertips, the suggestion that they might in fact do the opposite is damning.
He’s far from the only researcher worrying about the cognitive impact of these technologies. In February a team at Microsoft Research Cambridge published a report concluding that generative AI tools “can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.” The researchers found that with the use of generative tools, people’s effort “shifts from task execution to task stewardship.”
Cook is concerned that generative tools don’t let you fail—a crucial part of learning new skills. We have a habit of saying that artists are gifted, says Cook. But the truth is that artists work at their art, developing skills over months and years.
“If you actually talk to artists, they say, ‘Well, I got good by doing it over and over and over,’” he says. “But failure sucks. And we’re always looking at ways to get around that.”
Generative models let us skip the frustration of doing a bad job.
“Unfortunately, we’re removing the one thing that you have to do to develop creative skills for yourself, which is fail,” says Cook. “But absolutely nobody wants to hear that.”
Surprise me
And yet it’s not all bad news. Artists and researchers are buzzing at the ways generative tools could empower creators, pointing them in surprising new directions and steering them away from dead ends. Cook thinks the real promise of AI will be to help us get better at what we want to do rather than doing it for us. For that, he says, we’ll need to create new tools, different from the ones we have now. “Using Midjourney does not do anything for me—it doesn’t change anything about me,” he says. “And I think that’s a wasted opportunity.”
Ask a range of researchers studying creativity to name a key part of the creative process and many will say: reflection. It’s hard to define exactly, but reflection is a particular type of focused, deliberate thinking. It’s what happens when a new idea hits you. Or when an assumption you had turns out to be wrong and you need to rethink your approach. It’s the opposite of a one-shot interaction.
Looking for ways that AI might support or encourage reflection—asking it to throw new ideas into the mix or challenge ideas you already hold—is a common thread across co-creativity research. If generative tools like DALL-E make creation frictionless, the aim here is to add friction back in. “How can we make art without friction?” asks Elisa Giaccardi, who studies design at the Polytechnic University of Milan in Italy. “How can we engage in a truly creative process without material that pushes back?”
Take Wilson’s live-coding agent. She claims that it pushes her musical improvisation in directions she might not have taken by herself. Trained on public code shared by the wider live-coding community, the model suggests snippets of code that are closer to other people’s styles than her own. This makes it more likely to produce something unexpected. “Not because you couldn’t produce it yourself,” she says. “But the way the human brain works, you tend to fall back on repeated ideas.”
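To give a flavor of that loop, here is a minimal sketch of a human-in-the-loop suggestion cycle. This is not Wilson’s actual system: "gpt2" is just a runnable stand-in for a model fine-tuned on patterns shared by the live-coding community, and the text stands in for real live-coding snippets:

```python
# Minimal sketch of a human-in-the-loop live-coding agent.
# Not Wilson's actual system: "gpt2" is a stand-in for a model
# fine-tuned on community live-coding patterns.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def suggest_pattern(current_mix: str) -> str:
    """Ask the model for a new loop, seeded with what's already playing."""
    out = generator(current_mix, max_new_tokens=40, do_sample=True)
    return out[0]["generated_text"][len(current_mix):].strip()

mix = "kick drum on quarter notes; hi-hats on the offbeats; "
suggestion = suggest_pattern(mix)
print("Agent suggests:", suggestion)

# The performer, not the model, decides what reaches the speakers.
if input("Add it to the mix? [y/n] ") == "y":
    mix += suggestion
```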
Last year, Wilson took part in a study run by Bryan-Kinns and his colleagues in which they observed six experienced musicians as they used a variety of generative models to help them compose a piece of music. The researchers wanted to get a sense of what kinds of interactions with the technology were useful and which were not.
The participants all said they liked it when the models made surprising suggestions, even when those were the result of glitches or mistakes. Sometimes the results were simply better. Sometimes the process felt fresh and exciting. But a few people struggled with giving up control. It was hard to direct the models to produce specific results or to repeat results that the musicians had liked. “In some ways it’s the same as being in a band,” says Bryan-Kinns. “You need to have that sense of risk and a sense of surprise, but you don’t want it totally random.”
Alternative designs
Cook comes at surprise from a different angle: He coaxes unexpected insights out of AI tools that he has developed to co-create video games. One of his tools, Puck, which was first released in 2022, generates designs for simple shape-matching puzzle games like Candy Crush or Bejeweled. A lot of Puck’s designs are experimental and clunky—don’t expect it to come up with anything you are ever likely to play. But that’s not the point: Cook uses Puck—and a newer tool called Pixie—to explore what kinds of interactions people might want to have with a co-creative tool.
Pixie can read computer code for a game and tweak certain lines to come up with alternative designs. Not long ago, Cook was working on a copy of a popular game called Disc Room, in which players have to cross a room full of moving buzz saws. He asked Pixie to help him come up with a design for a level that skilled and unskilled players would find equally hard. Pixie designed a room where none of the discs actually moved. Cook laughs: It’s not what he expected. “It basically turned the room into a minefield,” he says. “But I thought it was really interesting. I hadn’t thought of that before.”
Pushing back on assumptions, or being challenged, is part of the creative process, says Anne Arzberger, a researcher at the Delft University of Technology in the Netherlands. “If I think of the people I’ve collaborated with best, they’re not the ones who just said ‘Yes, great’ to every idea I brought forth,” she says. “They were really critical and had opposing ideas.”
She wants to build tech that provides a similar sounding board. As part of a project called Creating Monsters, Arzberger developed two experimental AI tools that help designers find hidden biases in their designs. “I was interested in ways in which I could use this technology to access information that would otherwise be difficult to access,” she says.
For the project, she and her colleagues (including Giaccardi) looked at the problem of designing toy figures that would be gender neutral. They used Teachable Machine, a web app built by Google researchers in 2017 that makes it easy to train your own machine-learning model to classify different inputs, such as images. They trained the model with a few dozen images that Arzberger had labeled as masculine, feminine, or gender neutral.
Arzberger then asked the model to identify the genders of new candidate toy designs. She found that quite a few designs were judged to be feminine even when she had tried to make them gender neutral. She felt that her views of the world—her own hidden biases—were being exposed. But the tool was often right: It challenged her assumptions and helped the team improve the designs. The same approach could be used to assess all sorts of design characteristics, she says.
Arzberger then used a second model, a version of a tool made by the generative image and video startup Runway, to come up with gender-neutral toy designs of its own. First the researchers trained the model to generate and classify designs for male- and female-looking toys. They could then ask the tool to find a design that was exactly midway between the male and female designs it had learned.
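That “exactly midway” step is, in essence, an interpolation in the model’s learned latent space. Here is a minimal sketch of the general idea, with a toy generator standing in for the Runway model (whose internals are not public) and placeholder latent vectors:

```python
# Sketch of latent-space interpolation between two learned designs.
# The Linear layer is a toy stand-in for a trained generative model.
import torch

generator = torch.nn.Linear(512, 64 * 64)  # latent vector -> "image"

z_male = torch.randn(512)    # latent behind a male-coded design (placeholder)
z_female = torch.randn(512)  # latent behind a female-coded design (placeholder)

# "Exactly midway": the 50/50 blend of the two latents.
z_neutral = 0.5 * (z_male + z_female)
candidate = generator(z_neutral).reshape(64, 64)
```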
Generative models can give feedback on designs that human designers might miss by themselves, she says: “We can really learn something.”
Bryan-Kinns is fascinated by how artists and designers find ways to use new technologies. “If you talk to artists, most of them don’t actually talk about these AI generative models as a tool—they talk about them as a material, like an artistic material, like a paint or something,” he says. “It’s a different way of thinking about what the AI is doing.” He highlights the way some people are pushing the technology to do weird things it wasn’t designed to do. Artists often appropriate or misuse these kinds of tools, he says.
Bryan-Kinns points to the work of Terence Broad, another colleague of his at the Creative Computing Institute, as a favorite example. Broad employs techniques like network bending, which involves inserting new layers into a neural network to produce glitchy visual effects in generated images, and generating images with a model trained on no data, which produces almost Rothko-like abstract swabs of color.
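As a rough illustration of what network bending means in practice, the sketch below inserts an extra, untrained operation between the layers of a small generator, distorting its intermediate activations. The toy network stands in for the large pretrained image generators Broad actually works with:

```python
# Toy illustration of network bending: a new layer is inserted into a
# generator so its intermediate activations get warped on the way through.
# The tiny network is a stand-in for a large pretrained model.
import torch
import torch.nn as nn

class Bend(nn.Module):
    """The inserted layer: an arbitrary distortion chosen for its effect."""
    def forward(self, x):
        return 1.5 * x.flip(-1)  # scale and mirror the activations

generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    Bend(),                       # <- the "bent" layer, never trained
    nn.Linear(256, 32 * 32), nn.Tanh(),
)

z = torch.randn(1, 64)
glitchy_image = generator(z).reshape(32, 32)
```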
But Broad is an extreme case. Bryan-Kinns sums it up like this: “The problem is that you’ve got this gulf between the very commercial generative tools that produce super-high-quality outputs but you’ve got very little control over what they do—and then you’ve got this other end where you’ve got total control over what they’re doing but the barriers to use are high because you need to be somebody who’s comfortable getting under the hood of your computer.”
“That’s a small number of people,” he says. “It’s a very small number of artists.”
Arzberger admits that working with her models was not straightforward. Running them took several hours, and she’s not sure the Runway tool she used is even available anymore. Bryan-Kinns, Arzberger, Cook, and others want to take the kinds of creative interactions they are discovering and build them into tools that can be used by people who aren’t hardcore coders.
Finding the right balance between surprise and control will be hard, though. Midjourney can surprise, but it gives few levers for controlling what it produces beyond your prompt. Some have claimed that writing prompts is itself a creative act. “But no one struggles with a paintbrush the way they struggle with a prompt,” says Cook.
Faced with that struggle, Cook sometimes watches his students just go with the first results a generative tool gives them. “I’m really interested in this idea that we are priming ourselves to accept that whatever comes out of a model is what you asked for,” he says. He is designing an experiment that will vary single words and phrases in similar prompts to test how much of a mismatch people see between what they expect and what they get.
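A sketch of what such a trial might look like, assuming the design Cook describes (the prompt template and word list here are invented for illustration):

```python
# Hypothetical prompt-variation trial: hold everything fixed except one
# word, then compare what people expected against what the model returned.
TEMPLATE = "a {style} drawing of a lighthouse"   # invented example prompt
VARIANTS = ["beautiful", "rough", "bad", "childish"]

for style in VARIANTS:
    prompt = TEMPLATE.format(style=style)
    print(prompt)
    # image = generate(prompt)          # any text-to-image model
    # score = rate_mismatch(image)      # participant's expected-vs-got rating
```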
But it’s early days yet. In the meantime, companies developing generative models typically emphasize results over process. “There’s this impressive algorithmic progress, but a lot of the time interaction design is overlooked,” says Rezwana.
For Wilson, the crucial choice in any co-creative relationship is what you do with what you’re given. “You’re having this relationship with the computer that you’re trying to mediate,” she says. “Sometimes it goes wrong, and that’s just part of the creative process.”
When AI gives you lemons—make art. “Wouldn’t it be fun to have something that was completely antagonistic in a performance—like, something that is actively going against you—and you kind of have an argument?” she says. “That would be interesting to watch, at least.”