AI liability in the EU

Numerous discussions have focused on the European Union's AI Act, yet it lacks provisions for individual citizens to claim compensation for damage caused by AI systems. The European Commission deliberately chose to address AI liability through the lens of product liability instead. In this blog, we explore the implications of the new Product Liability Directive and the AI Liability Directive, and their connections to the AI Act, for understanding AI liability in the EU.

Overview of the new Product Liability Directive and its impact on AI

A few days after the political agreement on the much-discussed AI Act (see our blog post), another political agreement was reached on a key piece of AI legislation, admittedly less in the spotlight but also very important. On 14 December 2023, the Spanish Presidency of the Council and the European Parliament reached a political agreement on a new directive that will update the existing Product Liability Directive for defective products to better reflect the fact that many products now have digital features and that the economy is becoming increasingly circular.

  1. First, the new Product Liability Directive substantially modifies the current product liability regime by broadening its scope to include all AI systems and AI-enabled goods, clearly categorising AI systems as “products”. Open-source software is excluded from the scope in order to avoid burdening research and innovation.
  2. Second, the definition of a "product defect" will be modified to incorporate characteristics specific to AI, in particular a system's capacity for continuous learning after rollout. By extending liability to such cases, this amendment recognises the dynamic nature of AI systems and aims to ensure that accountability covers challenges arising from a system's ongoing learning capabilities.
  3. Third, the Directive includes a reversal of the burden of proof. Once the user has made a reasonable argument that the harm is linked to the behaviour of the AI system, the burden shifts to the AI operator to prove that the product was not defective. In cases involving great technical or scientific complexity, courts may require the user to prove only the likelihood that the product was defective or that its defect was a likely cause of the damage.

AI Liability Directive

In September 2022, the European Commission proposed the AI Liability Directive, which provides a legal framework for individuals adversely affected by the output of an AI system to pursue legal action against AI operators. The directive introduces new procedures for fault-based liability for damage caused by AI systems.

  1. Like the Product Liability Directive, the AI Liability Directive recognises the opacity of AI systems and the information imbalance between developers and users or consumers. Both directives shift the burden of proof to developers by introducing disclosure mechanisms and rebuttable presumptions: users only need to provide plausible evidence of potential harm, while AI operators are required to disclose all relevant information to avoid liability. Failure to disclose such information leads to a rebuttable presumption that the AI operator has breached its duty of care, which the operator can rebut by proving that the duty of care was in fact fulfilled.
  2. In addition, the AI Liability Directive allows claims against non-professional users of AI systems who cause harm to others, and it acknowledges human rights violations as eligible damage.

Currently, the AI Liability Directive is making its way through the EU's legislative process.

AI Act 

Under the AI Act, the main enforcement mechanism is government intervention through national supervisory authorities. These authorities can impose fines, issue binding instructions or restrictions, and even order certain AI practices to cease entirely. This approach is very similar to that of the GDPR.

However, there is a direct link with the directives discussed above. Failure to comply with the requirements of the AI Act constitutes a breach of the relevant duty of care under these directives, and such a breach automatically exposes AI operators to liability for damages.

Conclusion

The forthcoming AI Act and related liability directives mark a transformative shift in the EU's approach to AI regulation, combining proactive damage prevention with effective remedies after an incident. Companies should adapt to these evolving EU AI rules early to foster innovation, compliance, and operational confidence.

Seeking further insights into the Product Liability Directive, AI Liability Directive, or AI Act? Reach out to Timelex for expert guidance.