EU Study Suggests Strict Liability For High-Risk AI Systems
MediaNama’s Take:
While the European Union (EU) Parliament’s study points out the benefits of having a single point of liability within the AI chain, one has to wonder whether this runs the risk of oversimplifying risk attribution. AI models have multiple moving parts that can affect their performance. They involve numerous stakeholders across the value chain – from data providers and model developers to cloud infrastructure providers, integrators, and end-users. Each party contributes different elements that could potentially lead to harmful outcomes: biased training data, flawed algorithms, inadequate security measures, improper implementation, or misuse.
By consolidating all liability onto a single operator, the framework may create perverse incentives where upstream parties become less careful about their contributions, knowing they won’t bear direct liability. While the operator may be able to sue other players in the ecosystem, such recourse comes with its own problems, which mirror those an end user faces under a fault-based liability regime: the operator would, for instance, have to demonstrate upstream negligence in these lawsuits. This could create a paradoxical situation where the strict liability regime, designed to spare victims the complexity of proving fault, simply shifts the same evidentiary burdens onto operators in their subsequent claims against upstream parties.
What’s the news:
High-risk AI systems require a standalone strict liability regime, the EU Parliament’s recent study on AI and civil liability suggests. The study, published on July 24, advocates for a liability framework that holds a single party liable for harm originating from an AI model: the operator that controls the model and benefits economically from its use. In tort law, strict liability is liability for harm caused to another person, irrespective of intent or fault.
“This ‘one stop shop’ model would eliminate the problems of causal uncertainty and overlapping liabilities, allowing risks to be internalised and costs to be managed through insurance and price mechanisms,” the study says, explaining its strict liability approach. It argues that a strict liability regime would also eliminate the need for governments to impose procedural requirements on AI companies, such as disclosures. Instead, there would be clear substantive obligations that would improve predictability for both the AI operators and the victims.
Key considerations for AI liability rules:
The EU Parliament’s study argues that an effective approach to AI liability considers three main factors:
- Regulatory clarity, to ensure ex ante (in advance) predictability for economic operators. This better equips them to price their products, factoring insurance coverage and liability costs into their plans and eventually passing these costs down to their pool of users.
- Effective victim compensation through simplified liability mechanisms
- Substantive harmonisation to prevent regulatory fragmentation within the EU
Besides these key considerations, the study also factors in the pervasive nature of AI, emphasising that if the EU applies identical rules to AI systems across different domains, it will have a horizontal effect on domains that the region currently keeps distinct. It also notes that AI systems cannot be treated uniformly, because not all AI models share the same traits.
“AI-based applications range between diagnostic tools, to high-frequency trading algorithms, to social and bio-inspired robotics, autonomous vehicles, and software agents. Any attempt to identify a common minimum denominator from a technical standpoint is doomed to fail,” the study notes. Just as the systems are diverse, so are their risk profiles.
To account for this, the study suggests focusing on regulating civil liability for high-risk AI models. It notes that the EU AI Act already relies on this high-risk categorisation, and that one could argue the models it classifies as high risk are quite distinct from one another, with no criteria in place to measure their respective impact on users. The study advises that the EU should create a clear mechanism to assess and certify whether an AI model producer or provider has correctly concluded that its model is high-risk.
Who should be liable for AI risks?
An ideal regulatory framework for high-risk AI systems must have a single entry point for litigation, the study suggests. Instead of asking who was at fault, such a framework should base liability on who, above all other parties, was best positioned to minimise risks and manage the costs of the harms that nonetheless materialise. For each AI system, EU member states should be able to identify a single operator, just as every product has a single producer, the study argues.
“The fundamental idea is that the last party entering into contact with the victim, who controlled the AIS [AI system] to provide a product or service through it, should be held to compensate for all damage suffered,” the study argues.
It emphasises that this definition could cover both the AI service provider and the deployer; however, if someone suffers damage, it would be simple enough to determine whether they interacted with the provider or the deployer and, in turn, who should compensate them.
“This would not depend on disentangling a complex causal nexus, but on whether the party was using a product or service offered under a particular name or trademark or, instead, whether the AIS was being used in a professional context by another entity when the damage occurred,” it explains. The only exceptions the study supports are cases of force majeure or harm caused by the user’s own reckless behaviour.
Key aspects of AI liability rules:
The study points out the following for the EU Parliament’s consideration:
- Once a user sues the operator, the operator should, in turn, have the right to sue other parties along the value chain.
- The EU member states should not set limits on the amount a user can claim in damages.
- If someone buys an AI product/service that causes harm, they shouldn’t need separate legal actions to recover both the harm caused and the cost of the defective AI system itself.
What does India think about AI liability?
India currently does not have any specific AI regulation, making it harder to delineate a clear regulatory stance on civil liability. However, the 2018 National Strategy for Artificial Intelligence suggests the country does not want the focus to be on figuring out who is liable; instead, it wants to identify the components within an AI model that failed and find ways to prevent such failures in the future. The strategy proposed a framework with the following key points:
- A negligence standard for liability as opposed to strict liability. This means that, unlike what the EU’s study proposes, companies would only be liable if they were careless. This would encourage self-regulation through damage assessments during AI development.
- Formulating safe harbours to limit liability for AI companies. This means that a company would have limited liability if it follows proper AI development practices (in terms of design, testing, monitoring, and improvement).
- Instead of making any one party pay all damages, the strategy splits liability based on each party’s actual contribution to the harm, especially in relation to AI misuse.
- Lawsuits would require real damages, not just fear of potential future harm.