Business applications powered by AI are revolutionizing customer experiences, accelerating the speed of business, and driving employee productivity. In fact, according to research firm Frost & Sullivan’s 2024 Global State of AI report, 89% of organizations believe AI and machine learning will help them grow revenue, boost operational efficiency, and improve customer experience.

Take, for example, Vodafone. The telecommunications company is using a suite of Azure AI services, such as Azure OpenAI Service, to deliver real-time, hyper-personalized experiences across all of its customer touchpoints, including its digital chatbot TOBi. Naga Surendran, senior director of product marketing for Azure Application Services at Microsoft, says that by leveraging AI to increase customer satisfaction, Vodafone has managed to resolve 70% of its first-stage inquiries through AI-powered digital channels. It has also boosted the productivity of support agents by providing them with access to AI capabilities that mirror those of Microsoft Copilot, an AI-powered productivity tool.

“The result is a 20-point increase in net promoter score,” he says. “These benefits are what’s driving AI infusion into every business process and application.”

Yet realizing measurable business value from AI-powered applications requires a new game plan. Legacy application architectures simply aren’t capable of meeting the high demands of AI-enhanced applications. Rather, the time is now for organizations to modernize their infrastructure, processes, and application architectures using cloud native technologies to stay competitive.

The time is now for modernization

Today’s organizations exist in an era of geopolitical shifts, growing competition, supply chain disruptions, and evolving consumer preferences. AI applications can help by supporting innovation, but only if they have the flexibility to scale when needed. Fortunately, by modernizing applications, organizations can achieve the agile development, scalability, and fast compute performance needed to support rapid innovation and accelerate the delivery of AI applications. David Harmon, director of software development for AMD, says companies “really want to make sure that they can migrate their current [environment] and take advantage of all the hardware changes as much as possible.” The result is not only a shorter development lifecycle for new applications but also a faster response to changing world circumstances.

Beyond building and deploying intelligent apps quickly, modernizing applications, data, and infrastructure can significantly improve customer experience. Consider, for example, Coles, an Australian supermarket chain that invested in modernization and is using data and AI to deliver dynamic e-commerce experiences to its customers both online and in-store. With Azure DevOps, Coles has shifted from monthly to weekly application deployments while reducing build times by hours. What’s more, by aggregating views of customers across multiple channels, Coles has been able to deliver more personalized customer experiences. In fact, according to a 2024 CMSWire Insights report, there is a significant rise in the use of AI across the digital customer experience toolset, with 55% of organizations now using it to some degree, and more beginning their journey.

But even the most carefully designed applications are vulnerable to cybersecurity attacks. If given the opportunity, bad actors can extract sensitive information from machine learning models or maliciously infuse AI systems with corrupt data. “AI applications are now interacting with your core organizational data,” says Surendran. “Having the right guard rails is important to make sure the data is secure and built on a platform that enables you to do that.” The good news is that modern cloud-based architectures can deliver robust security, data governance, and AI guardrails such as content safety to protect AI applications from security threats and ensure compliance with industry standards.

The answer to AI innovation

New challenges, from demanding customers to ill-intentioned hackers, call for a new approach to modernizing applications. “You have to have the right underlying application architecture to be able to keep up with the market and bring applications faster to market,” says Surendran. “Not having that foundation can slow you down.”

Enter cloud native architecture. As organizations increasingly adopt AI to accelerate innovation and stay competitive, there is a growing urgency to rethink how applications are built and deployed in the cloud. By adopting cloud native architectures, Linux, and open source software, organizations can better facilitate AI adoption and create a flexible platform purpose-built for AI and optimized for the cloud. Harmon explains that open source software creates options: “And the overall open source ecosystem just thrives on that. It allows new technologies to come into play.”

Application modernization also ensures optimal performance, scale, and security for AI applications. That’s because modernization goes beyond just lifting and shifting application workloads to cloud virtual machines. Rather, a cloud native architecture is inherently designed to provide developers with the following features:

  • The flexibility to scale to meet evolving needs
  • Better access to the data needed to drive intelligent apps
  • Access to the right tools and services to build and deploy intelligent applications easily
  • Security embedded into an application to protect sensitive data

Together, these cloud capabilities ensure organizations derive the greatest value from their AI applications. “At the end of the day, everything is about performance and security,” says Harmon. Cloud is no exception.

What’s more, Surendran notes that “when you leverage a cloud platform for modernization, organizations can gain access to AI models faster and get to market faster with building AI-powered applications. These are the factors driving the modernization journey.”

Best practices in play

For all the benefits of application modernization, there are steps organizations must take to ensure both technological and operational success. They are:

Train employees for speed. As modern infrastructure accelerates the development and deployment of AI-powered applications, developers must be prepared to work faster and smarter than ever. For this reason, Surendran warns, “Employees must be skilled in modern application development practices to support the digital business needs.” This includes developing expertise in working with loosely coupled microservices to build scalable, flexible applications and integrate AI.

Start with an assessment. Large enterprises are likely to have “hundreds of applications, if not thousands,” says Surendran. As a result, organizations must take the time to evaluate their application landscape before embarking on a modernization journey. “Starting with an assessment is super important,” continues Surendran. “Understanding, taking inventory of the different applications, which team is using what, and what this application is driving from a business process perspective is critical.”

Focus on quick wins. Modernization is a huge, long-term transformation in how companies build, deliver, and support applications. Most businesses are still learning and developing the right strategy to support innovation. For this reason, Surendran recommends focusing on quick wins while also working on a larger application estate transformation. “You have to show a return on investment for your organization and business leaders,” he says. For example, modernize some apps quickly with re-platforming and then infuse them with AI capabilities.

Partner up. “Modernization can be daunting,” says Surendran. Selecting the right strategy, process, and platform to support innovation is only the first step. Organizations must also “bring on the right set of partners to help them go through change management and the execution of this complex project.”

Address all layers of security. Organizations must be unrelenting when it comes to protecting their data. According to Surendran, this means adopting a multi-layer approach to security that includes: security by design, in which products and services are developed from the get-go with security in mind; security by default, in which protections exist at every layer and interaction where data exists; and security by ongoing operations, which means using the right tools and dashboards to govern applications throughout their lifecycle.

A look to the future

Most organizations are already aware of the need for application modernization. But with the arrival of AI comes the startling revelation that modernization efforts must be done right, and that AI applications must be built and deployed for greater business impact. Adopting a cloud native architecture can help by serving as a platform for enhanced performance, scalability, security, and ongoing innovation. “As soon as you modernize your infrastructure with a cloud platform, you have access to these rapid innovations in AI models,” says Surendran. “It’s about being able to continuously innovate with AI.”

Read more about how to accelerate app and data estate readiness for AI innovation with Microsoft Azure and AMD. Explore Linux on Azure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Digital twins of human organs are here. They’re set to transform medical treatment.

Steven Niederer, a biomedical engineer at the Alan Turing Institute and Imperial College London, has a cardboard box filled with 3D-printed hearts. Each of them is modeled on the real heart of a person with heart failure, but Niederer is more interested in creating detailed replicas of people’s hearts using computers. 

These “digital twins” are the same size and shape as the real thing. They work in the same way. But they exist only virtually. Scientists can do virtual surgery on these virtual hearts, figuring out the best course of action for a patient’s condition.

After decades of research, models like these are now entering clinical trials and starting to be used for patient care. The eventual goal is to create digital versions of our bodies—computer copies that could help researchers and doctors figure out our risk of developing various diseases and determine which treatments might work best.

But the budding technology will need to be developed very carefully. Read the full story to learn why.

—Jessica Hamzelou

This story is from the forthcoming magazine edition of MIT Technology Review, set to go live on January 6—it’s all about the exciting breakthroughs happening in the world right now. If you don’t already, subscribe to receive future copies.

This is where the data to build AI comes from

AI is all about data. Reams and reams of data are needed to train algorithms to do what we want, and what goes into the AI models determines what comes out. But here’s the problem: AI developers and researchers don’t really know much about the sources of the data they are using.

The Data Provenance Initiative, a group of over 50 researchers from both academia and industry, wanted to fix that. They wanted to know, very simply: Where does the data to build AI come from?

Their findings, shared exclusively with MIT Technology Review, show a worrying trend: AI’s data practices risk concentrating power overwhelmingly in the hands of a few dominant technology companies. Read the full story.

—Melissa Heikkilä

Three pieces of good news on climate change in 2024

The vibes in the climate world this year have largely been … less than great.

Global greenhouse-gas emissions hit a new high, and this year is also on track to be the warmest on record. Global climate talks fell flat, and disasters from wildfires to hurricanes are being made worse by climate change.

But among all that (very real) negative news, there was some good, too: We saw progress cutting back on the most polluting fossil fuels, cheaper and better technologies for combating climate change, and a continuous global effort to address the problem. So as we near the end of 2024, let’s take a moment to look back on some of the bright spots.

—Casey Crownhart

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The US Supreme Court will hear TikTok’s appeal against its ban 
It’s agreed to hear the company’s arguments on January 10. (FT $)
+ A ruling could follow shortly afterwards. (WP $)
+ Here’s how a couple of the most likely scenarios could play out. (The Information $)

2 Amazon’s telehealth clinic is being sued
Philip Tong died shortly after a virtual appointment last year. His family wants answers. (WP $)
+ The legal case accuses the health provider of negligently failing to care for Tong. (LA Times $)

3 The Boeing Starliner astronauts are still stuck in space
Their return to Earth has been pushed back yet again, this time to March 2025. (WP $)
+ They’ve been living on the ISS since June. (The Guardian)

4 Dangerous disordered eating content is rife on X
The platform’s content moderation has become so lax, harmful communities are thriving unchecked. (The Atlantic $)

5 People are shining lasers at planes flying over New York
Amid the local drone panic, pilots are struggling with the unwelcome intrusions. (404 Media)
+ Don’t be surprised if other similar drone panics crop up in the future. (Vox)

6 How Google Street View helped to solve a missing-person case
After its cars captured a man hunched over a large white bag in a car trunk. (NYT $)
+ Google Maps is still the biggest, but these startups are fast gaining traction. (Fast Company $)

7 Why you shouldn’t remove fluoride from your drinking water
Unless you desperately want to jeopardize your dental health. (WSJ $)
+ It’s not the first time concerns around fluoride have surfaced. (NYT $)

8 The old internet is slowly disappearing
What does that mean for our collective cultural understanding? (The Verge)
+ How to fix the internet. (MIT Technology Review)

9 Europeans just love balcony solar panels
They’re simple to install and can help to keep electricity bills down. (The Guardian)
+ How to store energy for leaner times. (Knowable Magazine)
+ Advanced solar panels still need to pass the test of time. (MIT Technology Review)

10 You can now call ChatGPT on the phone 📞
There’s nowhere left to hide. (Bloomberg $)

Quote of the day

“I don’t think that work is suitable for human beings.” 

—James Irungu, a former Facebook content moderator, reflects on the horrific material he encountered in the job, the Guardian reports.

The big story

Future space food could be made from astronaut breath

May 2023

The future of space food could be as simple—and weird—as a protein shake made with astronaut breath or a burger made from fungus.

For decades, astronauts have relied mostly on pre-packaged food during their forays off our planet. With missions beyond Earth orbit in sight, a NASA-led competition is hoping to change all that and usher in a new era of sustainable space food.

To solve the problem of feeding astronauts on long-duration missions, NASA asked companies to propose novel ways to develop sustainable foods for future missions. Around 200 rose to the challenge—creating nutritious (and outlandish) culinary creations in the process. Read the full story.

—Jonathan O’Callaghan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These optical illusion bird sculptures are a sight to be seen. 🦜
+ Don’t blame me if you end up wanting to eat this Bûche de Noël in one sitting.
+ Casio watches are 50 years old—and cooler than ever.
+ Do you fly naked? (No, not like that...)

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The vibes in the climate world this year have largely been … less than great.

Global greenhouse-gas emissions hit a new high, reaching 37.4 billion metric tons in 2024. This year is also on track to be the warmest on record, with temperatures through September hitting 1.54 °C (2.77 °F) above preindustrial levels. Global climate talks fell flat, and disasters from wildfires to hurricanes are being made worse by climate change.

But among all that (very real) negative news, there was some good, too: We saw progress cutting back on the most polluting fossil fuels, cheaper and better technologies for combating climate change, and a continuous global effort to address the problem. As we near the end of 2024, let’s take a moment to look back on some of the bright spots.

We’re kicking coal to the curb

One of my favorite climate moments from this year happened in the UK. The country has historically relied heavily on coal as an electricity source—as of 1990, coal met about 65% of its electricity demand. But on September 30, 2024, the last coal plant in the nation shut down.

Renewables are stepping in to fill the gap. Wind farms in the UK are on track to produce more electricity this year than coal and gas plants together.

The moment was a symbolic one, and it also reflects the very real progress that’s happening around the world in inching away from this polluting fossil fuel. In the US, coal made up around 50% of the electricity supply four decades ago. In 2023, that share was roughly 16%.

We should see coal use plateau and potentially begin to fall by the end of the decade, according to the International Energy Agency. Progress needs to happen faster, though, and it needs to happen in countries like China, where energy demand is increasing. There’s also growing concern about what increasing energy demand from data centers, including those used to power AI, will mean for efforts to shut off old coal plants.

Batteries just keep getting cheaper

Lithium-ion battery packs are cheaper than ever in 2024, with prices dropping 20% this year to $115 per kilowatt-hour, according to data from BloombergNEF. That’s the biggest drop since 2017.

Batteries are a central technology for addressing climate change. They power the electric vehicles we’re relying on to help clean up the transportation sector and play an increasingly important role for the grid, since they can store energy from inconsistently available renewables like wind and solar.

Since EVs are still more expensive upfront than their gas-powered counterparts in most of the world, cheaper batteries are great news for efforts to get more people to take the leap to electric. And it’s hard to overstate how quickly battery prices have plummeted. Batteries were twice as expensive in 2017 as they are today. Just 10 years ago, prices were six times what they are in 2024. 

To be fair, there’s been mixed news in the EV world this year—a slowdown in demand growth for EVs is actually one of the factors helping battery prices hit record lows. EV sales are still growing around the world, but at a slower pace than they were in 2023. China is the biggest EV market in the world by far, making up three-quarters of global registrations in 2024 as of October. 

Climate tech is still busy and bustling

Looking back at the energy and climate stories we published this year, I can’t help but feel at least a little bit optimistic about what’s coming next. 

Some groups are looking to the natural world to address the climate crisis; this year, I covered a company working to grow microbes in massive bioreactors to help supplement our food sources, as well as researchers who are looking to plants to help mine the metals we need to fight climate change. Others hope to tweak biology—my colleague James Temple spoke with Jennifer Doudna about the potential for CRISPR, the gene-editing technology she pioneered.

Companies are deploying air-conditioning systems that can act like batteries, storing up energy for when it’s needed. The US Department of Energy is investing in projects that aim to concentrate heat from the sun and use it to power the grid or industrial processes. I spoke to a startup looking to make hydropower technology that’s safer for fish, and another building magnets using cheap, widely available materials.

And in October we published our 2024 list of 15 Climate Tech Companies to Watch, which featured everything from a startup using AI to detect wildfires to a company giving supplements to cattle to help cut emissions from their burps.  

Climate change represents a massive challenge for the world, and we’re entering an especially uncertain time. We’ll be covering it all, the good and the bad. Thanks for being here this year, and I’m looking forward to bringing you all the climate tech news you need in 2025.


Now read the rest of The Spark

Related reading

If you need a dash of innovation and positivity in your life, might I recommend taking a gander at our list of 15 Climate Tech Companies to Watch.

What’s more inspiring than young people working on the world’s most important problems? Our 2024 class of 35 Innovators Under 35 is sure to spread some cheer. 

If you’re needing even more innovation, why not look back at our 10 Breakthrough Technologies? Exascale computers certainly help me put things in perspective. And get excited, because our 2025 list is coming very, very soon. 

[Image: a humanoid robot sawing the branch that it is sitting on. Credit: Nico Ortega]

Another thing 

This year was filled with some exciting moments in technology, but there were also some failures. Here are a few of the worst technology flops of 2024. Check it out to see why voluntary carbon markets made the list and learn all about AI slop. 

And one more 

You’ve almost certainly heard that energy demand from AI is huge, and only expected to explode in the coming years. A new preprint study aimed to quantify just how bad things are, and the researchers found that data centers accounted for over 4% of electricity consumption in the US between September 2023 and August 2024. And the carbon intensity of the power that’s used is nearly 50% higher than the national average. 

Get all the details in the latest story from my colleague James O’Donnell.

Keeping up with climate  

Geothermal energy provides about 1% of global electricity today, but if things go well, the tech could meet up to 15% of global power demand growth through 2050. (Axios)

Renting an EV over the holidays? This is a great guide for first-time EV drivers, including helpful tips about how to handle charging. (Bloomberg)

Commonwealth Fusion Systems chose Virginia as the site for its first commercial fusion power plant. The company says the 400-megawatt plant will come online in the early 2030s. (Heatmap)
→ I recently visited Commonwealth’s first demonstration site in Massachusetts. It’s basically still a hole in the ground. (MIT Technology Review)

The US Department of Energy’s Loan Programs Office just committed $15 billion to a California utility. It’s the largest-ever commitment from the office. (New York Times)

The US EPA will grant California the right to ban gas-powered cars by 2035. The agency has to give the state a waiver to set its own rules. (Washington Post)
→ We can expect a legal battle, though. The incoming Trump administration is recommending major changes to cut off support for EVs and charging. (Reuters)

China dominates the world of lithium-ion batteries. Some startups in the US and Europe argue that rather than playing catch-up, the rest of the world should focus on alternative chemistries like lithium-sulfur and sodium-ion batteries. (Canary Media)

A healthy heart beats at a steady rate, between 60 and 100 times a minute. That’s not the case for all of us, I’m reminded, as I look inside a cardboard box containing around 20 plastic hearts—each a replica of a real human one.

The hearts, which previously sat on a shelf in a lab in West London, were generated from MRI and CT scans of people being treated for heart conditions at Hammersmith Hospital next door. Steven Niederer, a biomedical engineer at the Alan Turing Institute and Imperial College London, created them on a 3D printer in his office.

One of the hearts, printed in red recycled plastic, looks as I imagine a heart to look. It just about fits in my hand, and the chambers have the same dimensions as the ones you might see in a textbook. Perhaps it helps that it’s red.

The others look enormous to me. One in particular, printed in black plastic, seems more than twice the size of the red one. As I find out later, the person who had the heart it was modeled on suffered from heart failure.

The plastic organs are just for educational purposes. Niederer is more interested in creating detailed replicas of people’s hearts using computers. These “digital twins” are the same size and shape as the real thing. They work in the same way. But they exist only virtually. Scientists can do virtual surgery on these virtual hearts, figuring out the best course of action for a patient’s condition.

After decades of research, models like these are now entering clinical trials and starting to be used for patient care. Virtual replicas of many other organs are also being developed. Engineers are working on digital twins of people’s brains, guts, livers, nervous systems, and more. They’re creating virtual replicas of people’s faces, which could be used to try out surgeries or analyze facial features, and testing drugs on digital cancers. The eventual goal is to create digital versions of our bodies—computer copies that could help researchers and doctors figure out our risk of developing various diseases and determine which treatments might work best. They’d be our own personal guinea pigs for testing out medicines before we subject our real bodies to them.

To engineers like Niederer, it’s a tantalizing prospect very much within reach. Several pilot studies have been completed, and larger trials are underway. Those in the field expect digital twins based on organs to become a part of clinical care within the next five to 10 years, aiding diagnosis and surgical decision-making. Further down the line, we’ll even be able to run clinical trials on synthetic patients—virtual bodies created using real data.

But the budding technology will need to be developed carefully. Some worry about who will own this highly personalized data and how it could be used. Others fear for patient autonomy—with an uncomplicated virtual record to consult, will doctors eventually bypass the patients themselves? And some simply feel a visceral repulsion at the idea of attempts to re-create humans in silico. “People will say ‘I don’t want you copying me,’” says Wahbi El-Bouri, who is working on digital-twin technologies. “They feel it’s a part of them that you’ve taken.” 

Getting digital

Digital twins are well established in other realms of engineering; for example, they have long been used to model machinery and infrastructure. The term may have become a marketing buzzword lately, but for those working on health applications, it means something very specific. 

We can think of a digital twin as having three separate components, says El-Bouri, a biomedical engineer at the University of Liverpool in the UK. The first is the thing being modeled. That might be a jet engine or a bridge, or it could be a person’s heart. Essentially, it’s what we want to test or study.

The second component is the digital replica of that object, which can be created by taking lots of measurements from the real thing and entering them into a computer. For a heart, that might mean blood pressure recordings as well as MRI and CT scans. The third is new data that’s fed into the model. A true digital twin should be updated in real time—for example, with information collected from wearable sensors, if it’s a model of someone’s heart.

And the information transfer should run both ways. Just as sensors can deliver data from a person’s heart, the computer can model potential outcomes to make predictions and feed them back to a patient or health-care provider. A medical team might want to predict how a person will respond to a drug, for example, or test various surgical procedures on a digital model before operating in real life.
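
To make those three components concrete, here is a minimal Python sketch of a heart twin. The class, field names, and placeholder prediction logic are illustrative assumptions for this article, not any research team’s actual software.

    from dataclasses import dataclass

    # Conceptual sketch only: the subject (a patient's heart) is represented by a
    # digital replica built from measurements, kept current by new sensor data,
    # and queried for predictions that flow back to the care team.
    @dataclass
    class HeartTwin:
        chamber_volume_ml: float      # e.g. estimated from MRI/CT geometry
        blood_pressure_mmhg: float    # e.g. from a clinical recording
        heart_rate_bpm: float = 70.0

        def ingest(self, wearable_reading: dict) -> None:
            # Third component: a fresh reading (e.g. from a wearable) updates the model.
            if "heart_rate_bpm" in wearable_reading:
                self.heart_rate_bpm = wearable_reading["heart_rate_bpm"]

        def predict(self, intervention: str) -> str:
            # Information flowing the other way: simulate an intervention and
            # report the expected outcome. Placeholder logic for illustration.
            if intervention == "rate_control_drug" and self.heart_rate_bpm > 100:
                return "predicted return toward a normal resting rate"
            return "no significant predicted change"

    twin = HeartTwin(chamber_volume_ml=150.0, blood_pressure_mmhg=120.0)
    twin.ingest({"heart_rate_bpm": 112.0})
    print(twin.predict("rate_control_drug"))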

By this definition, pretty much any smart device that tracks some aspect of your health could be considered a kind of rudimentary digital twin. “You could say that an Apple Watch fulfills the definition of a digital twin in an unexciting way,” says Niederer. “It tells you if you’re in atrial fibrillation or not.” 

But the kind of digital twin that researchers like Niederer are working on is far more intricate and detailed. It could provide specific guidance on which disease risks a person faces, what medicines might be most effective, or how any surgeries should proceed.

We’re not quite there yet. Taking measurements of airplanes and bridges is one thing. It’s much harder to get a continuous data feed from a person, especially when you need details about the inner functions of the heart or brain, says Niederer. As things stand, engineers are technically creating “patient-specific models” based on previously collected hospital and research data, which is not continually updated. 

The most advanced medical digital twins are those built to match human hearts. These were the first to be attempted, partly because the heart is essentially a pump—a device familiar to engineers—and partly because heart disease is responsible for so much ill health and death, says El-Bouri. Now, advances in imaging technology and computer processing power are enabling researchers to mimic the organ with the level of fidelity that clinical applications require. 

Building a heart

The first step to building a digital heart is to collect images of the real thing. Each team will have its own slightly different approach, but generally, they all start with MRI and CT scans of a person’s heart. These can be entered into computer software to create a 3D movie. Some scans will also highlight any areas of damaged tissue, which might disrupt the way the electrical pulses that control heart muscle contraction travel through the organ.

The next step is to break this 3D model down into tiny chunks. Engineers use the term “computational mesh” to describe the result; it can look like an image of the heart made up of thousands of 3D pieces. Each segment represents a small collection of cells and can be assigned properties based on how well they are expected to propagate an electrical impulse. “It’s all equations,” says Natalia Trayanova, a biomedical engineering professor based at Johns Hopkins University in Baltimore, Maryland.

[Image: a computer model of the human heart shows how electrical signals pass through heart tissue. The model was created by Marina Strocchi, who works with Steven Niederer at Imperial College London. Courtesy of Marina Strocchi]

As things stand, these properties involve some approximation. Engineers will guess how well each bit of heart works by extrapolating from previous studies of human hearts or past research on the disease the person has. The end result is a beating, pumping model of a real heart. “When we have that model, you can poke it and prod it and see under what circumstances stuff will happen,” says Trayanova.
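
To give a flavor of what “it’s all equations” means in practice, here is a deliberately simplified Python sketch of a computational mesh: each element carries a conduction property, and activation spreads from element to element. Real cardiac twins solve detailed reaction-diffusion equations over enormous meshes; the tiny graph, made-up velocities, and shortest-time propagation below are only an illustrative stand-in.

    import heapq

    def activation_times(neighbors, conduction_velocity, source):
        """Earliest activation time for each mesh element, given per-element
        conduction velocities (a scarred element gets a low value)."""
        times = {source: 0.0}
        queue = [(0.0, source)]
        while queue:
            t, element = heapq.heappop(queue)
            if t > times.get(element, float("inf")):
                continue  # stale queue entry
            for nxt, distance in neighbors[element]:
                # Travel time into a neighbor depends on how well it conducts.
                arrival = t + distance / conduction_velocity[nxt]
                if arrival < times.get(nxt, float("inf")):
                    times[nxt] = arrival
                    heapq.heappush(queue, (arrival, nxt))
        return times

    # A tiny four-element "mesh": element 2 stands in for scarred tissue.
    neighbors = {
        0: [(1, 1.0)],
        1: [(0, 1.0), (2, 1.0)],
        2: [(1, 1.0), (3, 1.0)],
        3: [(2, 1.0)],
    }
    conduction_velocity = {0: 0.6, 1: 0.6, 2: 0.05, 3: 0.6}  # illustrative values
    print(activation_times(neighbors, conduction_velocity, source=0))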

Her digital twins are already being trialed to help people with atrial fibrillation, a fairly common condition that can trigger an irregular heartbeat—too fast or all over the place. One treatment option is to burn off the bits of heart tissue responsible for the disrupted rhythm. It’s usually left to a surgical team to figure out which bits to target.

For Trayanova, the pokes and prods are designed to help surgeons with that decision. Scans might highlight a few regions of damaged or scarred tissue. Her team can then construct a digital twin to help locate the underlying source of the damage. In total, the tool will likely suggest two or three regions to destroy—though in rare instances, it has shown many more, says Trayanova: “They just have to trust us.” So far, 59 people have been through the trial. More are planned. 

In cases like these, the models don’t always need to be continually updated, Trayanova says. A heart surgeon might need to run simulations only to know where to implant a device, for example. Once that operation is over, no more data might be needed, she says.

Quasi patients

At his lab on the campus of Hammersmith Hospital in London, Niederer has also been building virtual hearts. He is exploring whether his models could be used to find the best place to implant pacemakers. His approach is similar to Trayanova’s, but his models also incorporate ECG data from patients. These recordings give a sense of how electrical pulses pass through the heart tissue, he says.

So far, Niederer and his colleagues have published a small trial in which models of 10 patients’ hearts were evaluated by doctors but not used to inform surgical decisions. Still, Niederer is already getting requests from device manufacturers to run virtual tests of their products. A couple have asked him to choose places where their battery-operated pacemaker devices can sit without bumping into heart tissue, he says. Not only can Niederer and his colleagues run this test virtually, but they can do it for hearts of various different sizes. The team can test the device in hundreds of potential locations, within hundreds of different virtual hearts. “And we can do it in a week,” he adds.

This is an example of what scientists call “in silico trials”—clinical trials run on a computer. In some cases, it’s not just the trials that are digital. The volunteers are, too.

El-Bouri and his colleagues are working on ways to create “synthetic” participants for their clinical trials. The team starts with data collected from real people and uses this to create all-new digital organs with a mishmash of characteristics from the real volunteers. 

Specifically, one of El-Bouri’s interests is stroke, a medical emergency in which clots or bleeds prevent blood flow in parts of the brain. For their research, he and his colleagues model the brain, along with the blood vessels that feed it. “You could create lots and lots of different shapes and sizes of these brains based on patient data,” says El-Bouri. Once he and his team create a group of synthetic patient brains, they can test how these clots might change the flow of blood or oxygen, or how and where brain tissue is affected. They can test the impact of certain drugs, or see what might happen if a stent is used to remove the blockage.

For another project, El-Bouri is creating synthetic retinas. From a starting point of 100 or so retinal scans from real people, his team can generate 200 or more synthetic eyes, “just like that,” he says. The trick is to figure out the math behind the distribution of blood vessels and re-create it through a set of algorithms. Now he is hoping to use those synthetic eyes in drug trials—among other things, to find the best treatment doses for people with age-related macular degeneration, a common condition that can lead to blindness.
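
The general recipe he describes (estimate the statistics of a feature from real scans, then sample new virtual patients from that fit) can be sketched in a few lines of Python. The vessel-density numbers and the Gaussian model below are made-up placeholders, not the actual algorithms behind the synthetic retinas.

    import random
    import statistics

    # Illustrative placeholder data: one vessel-density measurement per real scan.
    real_vessel_density = [0.21, 0.24, 0.19, 0.26, 0.22, 0.25, 0.20, 0.23]

    mu = statistics.mean(real_vessel_density)
    sigma = statistics.stdev(real_vessel_density)

    def synthesize_retinas(n, seed=0):
        """Draw n synthetic vessel densities from the fitted (assumed Gaussian) model."""
        rng = random.Random(seed)
        return [min(max(rng.gauss(mu, sigma), 0.0), 1.0) for _ in range(n)]

    # In spirit: 100 or so real scans become 200+ synthetic eyes.
    synthetic_cohort = synthesize_retinas(200)
    print(len(synthetic_cohort), round(statistics.mean(synthetic_cohort), 3))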

 These in silico trials could be especially useful for helping us figure out the best treatments for pregnant people—a group that is notoriously excluded from many clinical trials. That’s for fear that an experimental treatment might harm a fetus, says Michelle Oyen, a professor of biomedical engineering at Wayne State University in Detroit.

Oyen is creating digital twins of pregnancy. It’s a challenge to get the information needed to feed the models; during pregnancy, people are generally advised to avoid scans or invasive investigations they don’t need. “We’re much more limited in terms of the data that we can get,” she says. Her team does make use of ultrasound images, including a form of ultrasound that allows the team to measure blood flow. From those images, they can see how blood flow in the uterus and the placenta, the organ that supports a fetus, might be linked to the fetus’s growth and development, for example.

For now, Oyen and her colleagues aren’t creating models of the fetuses themselves—they’re focusing on the fetal environment, which includes the placenta and uterus. A baby needs a healthy, functioning placenta in order to survive; if the organ starts to fail, stillbirth can be the tragic outcome. 

Oyen is working on ways to monitor the placenta in real time during pregnancy. These readings could be fed back to a digital twin. If she can find a way to tell when the placenta is failing, doctors might be able to intervene to save the baby, she says. “I think this is a game changer for pregnancy research,” she adds, “because this basically gives us ways of doing research in pregnancy that [carries a minimal] risk of harm to the fetus or of harm to the mother.”

In another project, the team is looking at the impact of cesarean section scars on pregnancies. When a baby is delivered by C-section, surgeons cut through multiple layers of tissue in the abdomen, including the uterus. Scars that don’t heal well become weak spots in the uterus, potentially causing problems for future pregnancies. By modeling these scars in digital twins, Oyen hopes to be able to simulate how future pregnancies might pan out, and determine if or when specialist care might be called for.

Eventually, Oyen wants to create a full virtual replica of the pregnant uterus, fetus and all. “But we’re not there yet—we’re decades behind the cardiovascular people,” she says. “That’s pregnancy research in a nutshell,” she adds. “We’re always decades behind.”

Twinning

It’s all very well to generate virtual body parts, but the human body functions as a whole. That’s why the grand plan for digital twins involves replicas of entire people. “Long term, the whole body would be fantastic,” says El-Bouri.

It may not be all that far off, either. Various research teams are already building models of the heart, brain, lungs, kidneys, liver, musculoskeletal system, blood vessels, immune system, eye, ear, and more. “If we were to take every research group that works on digital twins across the world at the moment, I think you could put [a body] together,” says El-Bouri. “I think there’s even someone working on the tongue,” he adds. 

The challenge is bringing together all the various researchers, with the different approaches and different code involved in creating and using their models, says El-Bouri. “Everything exists,” he says. “It’s just putting it together that’s going to be the issue.”

In theory, such whole-body twins could revolutionize health care. Trayanova envisions a future in which a digital twin is just another part of a person’s medical record—one that a doctor can use to decide on a course of treatment. 

But El-Bouri says he receives mixed reactions to the idea. Some people think it’s “really exciting and really cool,” he says. But he’s also met people who are strongly opposed to the idea of having a virtual copy of themselves exist on a computer somewhere: “They don’t want any part of that.” Researchers need to make more of an effort to engage with the public to find out how people feel about the technology, he says.

There are also concerns over patient autonomy. If a doctor has access to a patient’s digital twin and can use it to guide decisions about medical care, where does the patient’s own input come into the equation? Some of those working to create digital twins point out that the models could reveal whether patients have taken their daily meds or what they’ve eaten that week. Will clinicians eventually come to see digital twins as a more reliable source of information than people’s self-reporting?

Doctors should not be allowed to bypass patients and just “ask the machine,” says Matthias Braun, a social ethicist at the University of Bonn in Germany. “There would be no informed consent, which would infringe on autonomy and maybe cause harm,” he says. After all, we are not machines with broken parts. Two individuals with the same diagnosis can have very different experiences and lead very different lives. 

However, there are cases in which patients are not able to make decisions about their own treatment—for example, if they are unconscious. In those cases, clinicians try to find a proxy—someone authorized to make decisions on the patient’s behalf. A digital psychological twin, trained on a person’s medical data and digital footprint, could potentially act as a better surrogate than, for example, a relative who doesn’t know the person’s preferences, he says.

If using digital twins in patient care is problematic, in silico trials can also raise issues. Jantina de Vries, an ethicist at the University of Cape Town, points out that the data used to create digital twins and synthetic “quasi patients” will come from people who can be scanned, measured, and monitored. This group is unlikely to include many of those living on the African continent, who won’t have ready access to those technologies. “The problem of data scarcity directly translates into technologies that … are not geared to think about diverse bodies,” she says.

De Vries thinks the data should belong to the public in order to ensure that as many people benefit from digital-twin technologies as possible. Every record should be anonymized and kept within a public database that researchers around the world can access and make use of, she says. 

The people who participate in Trayanova’s trials “explicitly give me consent to know their data, and to know who they are … [everything] about them,” she says. 

The people taking part in Niederer’s research also provide consent for their data to be used by the medical and research teams. But while clinicians have access to all medical data, researchers access only anonymized or pseudonymized data, Niederer says. 

In some cases, researchers will also ask participants to consent to sharing their fully anonymized data in public repositories. This is the only data that companies are able to access, he adds: “We do not share [our] data sets outside of the research or medical teams, and we do not share them with companies.” 

El-Bouri thinks that patients should receive some form of compensation in exchange for sharing their health data. Perhaps they should get preferential access to medications and devices based on that data, he suggests. At any rate, “[full] anonymization is tricky, particularly if you’re taking patient scans to develop twins,” he says. “Technically, if someone tried really hard, they might be able to piece back who someone is through scans and twins of organs.”

When I looked at those anonymous plastic hearts, stored in a cardboard box tucked away on a shelf in the corner of an office, they felt completely divorced from the people whose real, beating hearts they were modeled on. But digital twins seem different somehow. They’re animated replicas, digital copies that certainly appear to have some sort of life.

“People often think, Oh, this is just a simulation,” says El-Bouri. “But it’s a digital representation of an individual.” 
