When an autonomous AI agent causes harm inside the European Union, the legal question is not whether someone is liable. It is who, and under which instrument. The answer depends on the agent, the harm, the sector, and, increasingly, on whether the operator held the documentation required by the AI Act.
Key takeaways
- The Revised Product Liability Directive (EU) 2024/2853 includes software, and therefore AI systems, within its definition of a product.
- The AI Act is not itself a liability statute. Breaches of its duties support negligence claims under national law.
- Liability is shared between providers and deployers. Provider-side defects and deployer-side failures of oversight are treated as separate grounds.
- Early German, French, and Dutch case law treats autonomous decisions as the acts of the deployer for attribution purposes.
- The proposed AI Liability Directive was withdrawn in February 2025. The liability framework now rests on the Product Liability Directive, the AI Act, and national tort law.
Two directives, one revised, one withdrawn
European liability law for AI systems has two recent anchor points. The first is the Revised Product Liability Directive, Directive (EU) 2024/2853, which replaced the 1985 directive and entered into force in December 2024, with a two-year transposition period for the Member States running to December 2026. The second is the proposed AI Liability Directive, which the Commission withdrew in February 2025 after inter-institutional negotiations stalled. The withdrawal does not mean the liability question is settled. It means the operative instruments are now the Product Liability Directive, the AI Act, and national tort and contract law.
The Revised Product Liability Directive is the most significant change in general civil liability law in the Union in a generation. It does three things that matter for AI systems. It brings software within the definition of a product. It extends the class of persons who can be held liable to include authorised representatives and, in some cases, fulfilment service providers. And it introduces a presumption of defectiveness in cases where the claimant cannot produce technical evidence because the defendant has refused to disclose documentation that Union law required it to hold.
This last point is quiet but consequential. A claimant who suffers harm and cannot access the technical file is not left empty-handed. The court can presume that the product was defective, shifting the burden to the defendant to rebut the presumption. For AI systems, where the technical complexity is often used as a shield, this rule materially changes the practical bargaining position of a harmed party.
Provider liability
A provider, in the AI Act's terms, is any natural or legal person that develops an AI system or has one developed and places it on the market or puts it into service under its own name or trade mark. Under the Revised Product Liability Directive, the provider of an AI system is the manufacturer of a product, and therefore strictly liable for harm caused by a defective product without any need to prove fault.
The test for defectiveness is not whether the system worked as intended. It is whether the product provided the safety that a person was entitled to expect. Article 7 of the directive lists the factors that go into that assessment: the presentation of the product, the reasonably foreseeable uses, the moment at which the product was placed on the market, the effect of other products expected to be used with it, and the specific reasonable expectations of persons to whom the product is supplied. For an AI system, the list requires the court to consider the foreseeable behaviour of the model, the presentation in the instructions for use, and the interaction with other systems in the deployment environment.
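To make the assessment concrete, here is a minimal sketch of how a provider might capture the Article 7 factors in an internal record before placing an AI system on the market. The structure and field names are our own illustration, not statutory language, and a real assessment would sit inside the quality management system rather than a standalone script.

```python
from dataclasses import dataclass, field

@dataclass
class Article7Record:
    """Internal record of the Article 7 defectiveness factors.

    Field names are illustrative shorthand, not statutory terms.
    """
    system_id: str
    presentation: str                 # how the system is presented in the instructions for use
    foreseeable_uses: list[str]       # reasonably foreseeable uses, including misuse
    placed_on_market: str             # moment of placing on the market, ISO 8601 date
    interacting_products: list[str]   # other products expected to be used with it
    user_expectations: str            # specific expectations of the persons supplied
    open_questions: list[str] = field(default_factory=list)

    def complete(self) -> bool:
        # Bare completeness check: every factor has some content on file.
        return all([self.presentation, self.foreseeable_uses,
                    self.placed_on_market, self.user_expectations])
```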
A provider is not liable for every unexpected behaviour. The directive preserves a development risk defence, which a Member State may choose to include or exclude in transposition. The defence is available where the state of scientific and technical knowledge at the time of placing on the market was not such as to enable the defect to be discovered. In the AI context, the defence will be argued, but it will be argued against a background of rapidly published research on model failure modes. The longer a class of failure mode has been known in the research literature, the weaker the defence becomes.
Deployer liability
The deployer, in the AI Act's terms, is the organisation that uses the AI system under its authority in the course of a professional activity. Under the Revised Product Liability Directive, the deployer is usually not the manufacturer of the product, and therefore not the primary target of a strict liability claim. The deployer's exposure runs under the general law of negligence and, where the AI Act applies, the duties in Article 26.
A deployer is liable in negligence if the deployer owed a duty of care to the harmed party, breached that duty, and caused the harm. The AI Act supplies the duty side of the analysis in many cases. Article 26(1) requires the deployer to use the system within the parameters of the instructions for use. Article 26(2) requires human oversight by named persons. Article 26(5) requires monitoring and incident response. Each of these duties is, in the national tort context, a statutory standard against which the deployer's conduct is measured.
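A deployer that wants to evidence those three duties needs dated records, not assertions. The sketch below shows one shape such a record could take; the structure, field names, and example values are assumptions for illustration, not a form prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Article26Entry:
    """One dated entry evidencing the three Article 26 duties discussed above.

    The record shape is an illustrative assumption, not a statutory form.
    """
    system_id: str
    within_instructions: bool   # Art. 26(1): used within the instructions for use
    oversight_person: str       # Art. 26(2): named natural person exercising oversight
    incidents_open: int         # Art. 26(5): monitoring and incident response
    incidents_closed: int
    recorded_at: str = ""

    def __post_init__(self) -> None:
        if not self.recorded_at:
            self.recorded_at = datetime.now(timezone.utc).isoformat()

# Hypothetical weekly entry for an agent deployment.
entry = Article26Entry(
    system_id="claims-triage-agent",
    within_instructions=True,
    oversight_person="j.doe",
    incidents_open=1,
    incidents_closed=3,
)
```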
A deployer who followed the instructions but failed to supervise, or who supervised formally but failed to respond to a known incident, can be liable on negligence grounds even where the provider is separately liable for a product defect. The two heads of liability are not mutually exclusive. In practice, many claims will plead both routes and let the court allocate the loss between the defendants.
The position in the national courts
National case law on AI-related liability is still thin, but the early pattern is instructive. In Germany, where the civil code's approach to causation is more claimant-friendly than in many common law systems, the Landgerichte have accepted that a deployer of an automated decision system bears attribution for the decisions the system produces, even where the deployer did not review each output. The reasoning treats the decision to use the system as the act of the deployer, and the system's autonomy as a factor that raises the standard of care.
In France, the Cour de cassation has not yet ruled on an AI-specific case, but the Conseil d'État has issued several opinions on the use of algorithmic systems in the public sector. The pattern in those opinions is consistent with the German approach. The deployer is treated as accountable for the outputs, and the autonomy of the system is not treated as an exoneration.
In the Netherlands, the Rechtbank Den Haag's judgment in the SyRI case remains the leading authority on the deployment of automated systems in public administration. The case was decided under the European Convention on Human Rights rather than the AI Act, but its reasoning has been cited in several subsequent cases for the proposition that a system's technical complexity does not reduce the deployer's duty to explain how the system works and to justify its use.
The role of documentation
The single most consequential practical point about AI liability under the new framework is the role of documentation. The Revised Product Liability Directive's presumption of defectiveness in the face of undisclosed documentation, combined with the AI Act's insistence that deployers hold a minimum file under Article 26, makes documentation a first-order liability variable. An operator who holds the file is in a materially stronger position in any subsequent proceedings. An operator who does not hold it, or who holds it but cannot produce it, is in a materially weaker position.
This is the practical shift that operators often miss. Documentation under the AI Act is not a compliance formality. It is the evidence that supports or defeats a claim. Courts, regulators, and insurers have all begun to treat the file as the first thing they ask for. The form of the question is always the same: show me what you knew about this system, when you knew it, and what you did about it.
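As a minimal sketch of what such a file can look like in practice, the function below appends one entry per incident to an append-only log, structured around those three questions. The path, field names, and log format are our own assumptions; the point is the discipline of a dated, unedited record, not any particular schema.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("incident_log.jsonl")  # illustrative location; use durable, access-controlled storage

def log_incident(system_id: str, what_we_knew: str, what_we_did: str) -> dict:
    """Append one entry answering the three questions: what you knew,
    when you knew it, and what you did about it."""
    entry = {
        "system_id": system_id,
        "known": what_we_knew,
        "known_at": datetime.now(timezone.utc).isoformat(),
        "action": what_we_did,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```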
How insurance responds
European insurers are beginning to price AI agent exposure, but the market is still early. The dominant pattern is to exclude autonomous AI systems from existing general liability and professional indemnity policies, and to offer a separate line with limited capacity. The conditions on the separate line typically include an attestation of Article 26 compliance, evidence of a logging and incident response procedure, and a representation that the system has not been reclassified under Article 25.
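Expressed as data, the questionnaire reduces to something like the payload below. The keys mirror the three conditions just described; the naming and values are our own illustration, not any insurer's actual schema.

```python
# Illustrative underwriting attestation; keys mirror the three conditions
# described above and are our own naming, not an insurer's schema.
attestation = {
    "article_26_attestation": {
        "compliant": True,
        "attested_by": "head.of.compliance",  # hypothetical signatory
        "date": "2026-01-15",
    },
    "logging_and_incident_response": {
        "procedure_documented": True,
        "log_retention_months": 24,           # assumed retention period
        "last_incident_drill": "2025-11-03",
    },
    "article_25_status": {
        "reclassified_as_provider": False,    # no substantial modification made
        "last_reviewed": "2026-01-10",
    },
}
```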
This is not an accident. Underwriters are building the questionnaire around the documents that the court will ask for. The operator that holds the file has a path to coverage. The operator that does not hold the file is, for practical purposes, outside the market. We have described the underwriting architecture in detail in the liability framework, and the nine-document file that satisfies both regulatory and underwriting questions in how to document AI agent risk management for compliance.
The question of general purpose AI
A separate set of questions arises for general purpose AI models. Chapter V of the AI Act imposes obligations on providers of general purpose AI models, including an additional set of duties for models with systemic risk, presumed under Article 51 where the cumulative training compute exceeds 10^25 floating point operations. The Product Liability Directive applies to general purpose models as products, but the liability analysis becomes more complicated when the model is the substrate of an autonomous agent built by a downstream deployer.
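For orientation, a commonly used back-of-envelope estimate of training compute is roughly six floating point operations per parameter per training token. That heuristic is not part of the Act; the sketch below uses it, with hypothetical model figures, to show how the Article 51 threshold is checked in practice.

```python
# Back-of-envelope check against the systemic-risk compute threshold.
# The 6 * parameters * tokens estimate is a common heuristic, not part of the Act.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(parameters: float, tokens: float) -> float:
    return 6.0 * parameters * tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flop = estimated_training_flop(70e9, 15e12)        # 6.3e24 FLOP
presumed = flop > SYSTEMIC_RISK_THRESHOLD_FLOP     # False: below the threshold
print(f"{flop:.1e} FLOP, systemic-risk presumption: {presumed}")
```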
Two principles are beginning to settle. First, the provider of a general purpose model is not liable for every downstream use, but is liable for defects in the model itself, measured against the state of the art and the foreseeable range of uses. Second, the downstream deployer who builds an agent on top of a general purpose model is treated as a deployer of a new system, and carries the Article 26 duties on that new system regardless of the model underneath. A downstream deployer cannot hide behind the provider's documentation. The deployer's own technical and organisational measures are what the supervisor and the court will examine.
Practical conclusions
Three conclusions follow from the current state of the framework. First, the liability analysis for AI agents in the European Union is multi-source. It runs through the Product Liability Directive, the AI Act, and national tort and contract law. No single instrument resolves it. Second, documentation under the AI Act is not a compliance ritual. It is the primary evidentiary record for any subsequent proceeding, and its absence creates a presumption of defectiveness under the Product Liability Directive. Third, the operator's position is defensible only to the extent that the operator held the documentation, exercised the oversight, and responded to incidents. Anything else is, in practice, unprotected.
Related reading
For the statutory reading of Article 26 in full, see the 2026 compliance guide to operator obligations. For the documentation architecture, see how to document AI agent risk management for compliance. For the plain reading of the operator provisions, see the operator provisions of Regulation 2024/1689. For the underwriting implications, see the liability framework.
Frequently asked questions
Who is legally liable when an AI agent causes harm in the EU?
Liability is allocated across providers, deployers, and, in some cases, users. The Revised Product Liability Directive treats software, including AI systems, as a product for the purposes of no-fault liability for defective products. The AI Act adds sector-specific obligations that, if breached, support negligence-based claims. In practice, a harmed party can frame a claim under either route, and many claims will use both.
Does the Revised Product Liability Directive cover AI systems?
Yes. Directive (EU) 2024/2853 expressly includes software, and therefore AI systems, within its definition of a product. A claimant who suffers harm caused by a defective AI product can bring a no-fault claim against the manufacturer, and in certain circumstances against the provider of a digital service used to operate the product.
What is the AI Liability Directive, and is it in force?
The proposed AI Liability Directive was withdrawn by the European Commission in February 2025 after inter-institutional negotiations stalled. The underlying policy questions remain live, and several of its provisions, particularly those on access to evidence and presumptions of causation, are being discussed for inclusion in other instruments. For now, the liability framework for AI in the EU rests on the Revised Product Liability Directive, the AI Act, and national tort and contract law.
Can a deployer be held liable if they followed the provider's instructions?
Compliance with the provider's instructions is relevant but not a complete defence. A deployer still owes the Article 26 duties of oversight, monitoring, and incident response, and the general law of negligence still applies. A deployer who followed the instructions but failed to supervise the system, or failed to respond to a known incident, can be liable on negligence grounds even if the provider is separately liable for a product defect.
How do national courts treat autonomous decisions by AI agents?
Early case law in Germany, France, and the Netherlands has treated autonomous decisions as the acts of the deployer for the purposes of attribution. Courts have been unwilling to accept the argument that the deployer cannot be held accountable because the system acted on its own. Instead, the autonomy of the system has been treated as an aggravating factor that raises the standard of care expected of the deployer.
References
- Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products, OJ L, 18.11.2024.
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
- European Commission, Withdrawal of the Proposal for a Directive on AI Liability, COM(2022) 496, withdrawn February 2025.
- Directive 85/374/EEC on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, repealed and replaced by Directive (EU) 2024/2853.
- Rechtbank Den Haag, C/09/550982 / HA ZA 18-388, judgment of 5 February 2020 (SyRI).
- Articles 3, 14, 25, 26 and 27, and Chapter V, Regulation (EU) 2024/1689.
- Article 7, Directive (EU) 2024/2853, test for defectiveness.
- Articles 9 and 10, Directive (EU) 2024/2853, disclosure of evidence and presumption of defectiveness.