The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough.
Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.
The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become law around the same time. The AI Act would require extra checks for “high risk” uses of AI with the most potential to harm people, including systems for policing, recruitment, and health care.
The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.
For example, job seekers who can make a plausible case that an AI résumé-screening system discriminated against them can ask a court to force the AI company to grant them access to information about the system, so they can identify those responsible and find out what went wrong. Armed with this information, they can sue.
The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation.
In particular, the bill could have an adverse impact on software development, says Mathilde Adjutor, policy manager for Europe at the tech lobbying group CCIA, which represents companies including Google, Amazon, and Uber.
Under the new rules, “developers not only risk becoming liable for software bugs, but also for software’s potential impact on the mental health of users,” she says.
Imogen Parker, associate director of policy at the Ada Lovelace Institute, an AI research institute, says the bill will shift power away from companies and back toward consumers, a correction she sees as particularly important given AI’s potential to discriminate.

The bill will also ensure that when an AI system does cause harm, there is a common way to seek compensation across the EU, says Thomas Boué, head of European policy for the tech lobby BSA, whose members include Microsoft and IBM.
However, some consumer rights organizations and activists say the proposals don’t go far enough and will set the bar too high for consumers who want to bring claims.
Ursula Pachl, deputy director general of the European Consumer Organization, says the proposal is a “real letdown,” because it puts the responsibility on consumers to prove that an AI system harmed them or an AI developer was negligent.
“In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules,” Pachl says. For example, she says, it will be extremely difficult to prove that racial discrimination against someone was due to the way a credit scoring system was set up.
The bill also fails to take into account indirect harms caused by AI systems, says Claudia Prettner, EU representative at the Future of Life Institute, a nonprofit that focuses on existential AI risk. A better version would impose strict liability, holding companies responsible when their AI systems cause harm even without proof of fault, much as existing rules already do for cars or animals, Prettner adds.
“AI systems are often built for a given purpose but then lead to unexpected harms in another area. Social media algorithms, for example, were built to maximize time spent on platforms but inadvertently boosted polarizing content,” she says.
The EU wants its AI Act to be the global gold standard for AI regulation. Other countries, such as the US, where some efforts to regulate the technology are underway, are watching closely. The Federal Trade Commission is considering rules on how companies handle data and build algorithms, and it has compelled companies that collected data illegally to delete the algorithms trained on it. Earlier this year, the agency forced the diet company Weight Watchers to do so after it collected children’s data without permission.
Whether or not it succeeds, this new EU legislation will have a ripple effect on how AI is regulated around the world. “It is in the interest of citizens, businesses, and regulators that the EU gets liability for AI right. It cannot make AI work for people and society without it,” says Parker.