Apr 8, 2026

Liability for AI Errors: Who Is Responsible When an AI Gets It Wrong?

When an AI system causes harm, who is legally responsible? Learn how AI liability works across product liability, negligence, and contract law — and how to protect your business.

An AI-powered medical tool recommends the wrong medication dosage. A legal research tool generates a case citation that does not exist, and an attorney files a brief relying on it. An autonomous vehicle misidentifies an obstacle and causes an accident. A hiring algorithm screens out qualified candidates in a way that correlates with race.

In each of these scenarios, an AI system produced a harmful output. In each one, someone was harmed. And in each one, a court will eventually have to answer the same question: who is legally responsible?

AI liability is one of the most actively developing areas of law right now, and the answers are not yet settled. What is clear is that the existing legal frameworks — product liability, negligence, contract law, and sector-specific regulation — all apply to AI, even though none of them was designed with AI in mind. Companies deploying AI need to understand how those frameworks apply to them before something goes wrong, not after.

Why AI liability is genuinely complicated

Most liability frameworks were built around human actors or physical products. When a product injures someone, the manufacturer, distributor, and retailer all sit in a supply chain where responsibility can be allocated based on who made what decision. When a professional gives bad advice, liability attaches to the professional who gave it.

AI disrupts both models. An AI system is not a passive product that behaves identically every time it is used. Its outputs are probabilistic, context-dependent, and change over time as the model is updated or fine-tuned. An AI system is also not a professional in any legal sense. It has no license, no duty of care established through a professional relationship, and no capacity to be held accountable directly.

This creates a gap between where harm occurs and where liability can attach. Courts are filling that gap by applying existing legal frameworks to the humans and companies in the AI supply chain: the developers who built and trained the model, the companies that deployed it, and in some cases the users who relied on it.

Product liability: when AI is a defective product

Product liability law holds manufacturers and sellers responsible for injuries caused by defective products. Courts have debated for decades whether software counts as a "product" for these purposes, and AI systems are now pressing the same question; there is no principled reason AI would escape the framework, but exactly how it applies is still being worked out.

The three traditional theories of product liability all have potential application to AI.

Manufacturing defects occur when a specific unit of a product deviates from its intended design. In the AI context, this could apply to a model deployment that behaves differently than intended due to a bug, a corrupted training run, or a deployment error. If the model that harmed a specific user behaved differently than the model as designed, a manufacturing defect theory is available.

Design defects exist when the product's design itself is unreasonably dangerous, even when manufactured correctly. For AI systems, design defect claims could arise from architectural choices that make the model prone to hallucination in high-stakes contexts, training data choices that bake in bias, or the decision to deploy a model in a context where its known error rate creates unreasonable risk. The challenge for plaintiffs is that what constitutes a "defective" AI design is not well-established, and the probabilistic nature of AI performance complicates the analysis.

Failure to warn claims arise when a manufacturer fails to adequately warn users about known risks of a product. For AI systems, failure to warn could apply to deployers who do not disclose the model's known limitations, error rates, or contexts in which its outputs should not be relied upon without human verification. Given that many AI systems are deployed without meaningful disclosure of their limitations, this may be the most viable product liability theory in the near term.

The real-world application of product liability to AI is just beginning. A 2023 case involving an AI-powered medical device drew significant attention when questions arose about whether the system's outputs constituted a product subject to liability or a professional service subject to negligence analysis. Courts in different jurisdictions are reaching different preliminary conclusions, and the doctrine is unsettled.

Negligence: the duty of care question

Negligence requires a plaintiff to show that the defendant owed a duty of care, breached that duty, and caused harm as a result. In the AI context, the duty question is where the analysis gets complex.

AI developers owe a duty of care in the design, training, and testing of their systems. A developer who releases a model known to hallucinate medical information without adequate safety testing, or who trains on biased data without evaluating the impact on protected classes, could face negligence claims for resulting harm.

AI deployers owe a duty of care to the people affected by their deployment decisions. A company that deploys an AI system in a high-stakes context — medical diagnosis, legal advice, financial decisions, employment screening — without adequate oversight, human review, or disclosure of limitations may breach its duty of care to users and affected third parties. The deployer's liability does not depend on the developer's liability. Choosing to deploy an inadequate tool in a high-risk context can itself be negligent.

The attorney sanctions cases arising from AI hallucinations are among the most concrete examples of AI negligence liability to date. Multiple attorneys have been sanctioned by courts for filing briefs that relied on AI-generated citations to cases that do not exist. The courts have been clear: an attorney has a professional duty to verify what they file, and delegating that responsibility to an AI without verification breaches that duty. The AI's error does not excuse the attorney's negligence. This principle generalizes: professionals who use AI tools retain the duty of care their profession imposes, and AI does not lower that standard.

Contractual liability: what your agreements say matters

In many AI disputes, liability will be determined primarily by contract rather than tort law. The licensing agreements, terms of service, and vendor contracts governing the AI relationship contain provisions that allocate risk between the parties.

Limitations of liability in AI vendor agreements typically cap the vendor's liability at the fees paid in the prior 12 months. For AI systems whose errors can cause significant downstream harm, that cap may bear no relationship to the actual loss. A medical AI error that injures a patient, a financial AI error that causes a major trading loss, or a legal AI error that results in a malpractice claim could each generate damages many times the annual licensing fee.

Indemnification provisions determine who bears the cost of third-party claims. AI vendors often limit their indemnification obligations to IP infringement claims and exclude liability for harmful outputs. Deployers may find themselves holding liability for AI-caused harm that they cannot pass back to the vendor under the contract.

Disclaimers of warranties in AI agreements typically exclude all warranties about the accuracy, reliability, or fitness for purpose of the AI's outputs. These disclaimers may be enforceable in commercial relationships, though they can face challenges in consumer contexts where implied warranties are protected by law.

Understanding the contractual allocation of AI risk before you deploy is essential. If the vendor's liability is capped at the annual licensing fee and your potential exposure is orders of magnitude larger, you need either a better contract, additional insurance, or a different risk management strategy.

Sector-specific liability frameworks

Several industries have regulatory frameworks that create specific liability exposure for AI errors, operating alongside or in place of general tort and contract principles.

Healthcare. AI systems used in medical diagnosis, treatment recommendations, or patient monitoring operate in a regulatory environment that includes FDA oversight of software as a medical device, HIPAA data protection requirements, and state medical malpractice frameworks. Liability for AI medical errors is likely to be analyzed under malpractice standards for clinical decisions and product liability standards for the device itself, with the allocation depending on how closely the AI was integrated into the clinical judgment.

Financial services. AI systems used in investment advice, credit decisions, or trading operate under SEC, FINRA, CFPB, and Fair Credit Reporting Act frameworks. Adverse decisions driven by AI must meet the same adverse action notice requirements as human decisions. AI-generated investment advice may be subject to fiduciary duty analysis. Algorithmic trading errors carry liability exposure under securities law.

Employment. AI hiring and performance tools face liability under Title VII, the ADEA, the ADA, and state equivalents if they produce discriminatory outcomes. The EEOC has been explicit that employers are responsible for the discriminatory outputs of AI tools they use, even tools purchased from third parties. The employer cannot pass liability to the AI vendor simply because the vendor built the tool.

Autonomous systems. AI systems that operate in the physical world, including autonomous vehicles, drones, and robotic systems, face product liability exposure for physical harm caused by errors. Existing product liability doctrine applies, but the probabilistic nature of AI decision-making creates novel questions about what constitutes a defect and how causation is established when the system's decision-making process is opaque.

The role of explainability in liability

One of the most practically significant liability issues in AI is explainability: the ability to understand and articulate why an AI system reached a particular output. Explainability matters for liability in several ways.

Courts expect parties to be able to explain the basis for consequential decisions. If an AI system makes a decision that is challenged in litigation and the deployer cannot explain how the system reached that output, the inability to explain may itself support liability or at minimum create a significant evidentiary problem.

Regulators increasingly require explainability for AI decisions in regulated contexts. The CFPB requires that adverse action notices in credit decisions specify the reasons for the decision. The GDPR gives individuals a right to meaningful information about the logic involved in automated decisions that significantly affect them. The EU AI Act imposes transparency and explainability requirements on high-risk AI systems. Operating an AI system that cannot meet these requirements creates regulatory liability independent of any specific harm.

Investing in explainable AI design, maintaining audit logs of model decisions, and building the technical capacity to reconstruct why a model reached a particular output are not just good engineering practices. They are risk management tools with direct legal significance.

How to manage AI liability risk in your business

Conduct a deployment risk assessment. Before deploying any AI system in a context where its errors could cause harm, assess the probability of error, the severity of potential harm, the population affected, and what human oversight is in place. High-stakes deployments require proportionately rigorous oversight.
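
To make that assessment repeatable rather than ad hoc, it can help to reduce it to a simple triage step. The sketch below, in Python, is purely illustrative: the factor names, weights, and thresholds are assumptions made for the example, not a legal or regulatory standard, and a real assessment would be calibrated to your deployment.

    from dataclasses import dataclass

    @dataclass
    class DeploymentRisk:
        error_likelihood: float   # estimated probability of a harmful error (0.0 to 1.0)
        harm_severity: int        # 1 (minor) to 5 (severe: bodily, financial, or legal injury)
        affected_population: int  # number of people exposed to the system's decisions

        def score(self) -> float:
            # Expected-harm proxy: likelihood x severity, scaled by reach (capped).
            return self.error_likelihood * self.harm_severity * min(self.affected_population, 10_000)

        def required_oversight(self) -> str:
            s = self.score()
            if s >= 1_000:
                return "human review of every consequential output"
            if s >= 100:
                return "sampled human review plus automated monitoring"
            return "periodic audit of logged outputs"

    # Example: a resume-screening tool with an estimated 2% harmful-error rate.
    risk = DeploymentRisk(error_likelihood=0.02, harm_severity=4, affected_population=5_000)
    print(risk.required_oversight())  # -> sampled human review plus automated monitoring

The point is not the particular numbers but the discipline: every deployment gets a documented estimate of likelihood, severity, and reach, and the oversight requirement follows from that estimate rather than from optimism.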

Implement human review for consequential decisions. Across every sector and every liability framework, the consistent legal principle is that humans retain responsibility for consequential decisions. Build human review into AI workflows where errors could cause significant harm, and document that review process.
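
One way to make that principle concrete in code is a gate that holds consequential outputs until a named human approves them, and that records the result either way. This is a minimal sketch assuming a hypothetical request_human_review function supplied by your own workflow tooling; the context categories are illustrative.

    from datetime import datetime, timezone

    # Illustrative list of contexts treated as consequential.
    CONSEQUENTIAL_CONTEXTS = {"medical", "legal", "credit", "employment"}

    def release_output(ai_output: str, context: str, request_human_review) -> dict:
        # Build the review record first so every release, gated or not, is documented.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "context": context,
            "ai_output": ai_output,
            "human_reviewed": False,
            "approved": None,
            "reviewer": None,
        }
        if context in CONSEQUENTIAL_CONTEXTS:
            # Hold the output until a named human approves or rejects it.
            reviewer_id, approved = request_human_review(ai_output, context)
            record.update(human_reviewed=True, approved=approved, reviewer=reviewer_id)
        else:
            record["approved"] = True  # low-stakes outputs pass through automatically
        return record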

Review and negotiate your AI vendor agreements. Understand the liability caps, indemnification scope, and warranty disclaimers in your vendor contracts before you deploy. If the contractual risk allocation is inadequate for your use case, negotiate better terms or supplement with insurance.

Maintain audit trails. Log AI inputs, outputs, and the human decisions made in connection with AI recommendations. These records are your evidence in disputes and your defense in regulatory inquiries.
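
A minimal version of such a trail can be an append-only JSON Lines file with one timestamped record per AI interaction. The field names and file path in this sketch are illustrative assumptions; a production system would want durable, access-controlled, tamper-evident storage.

    import json
    from datetime import datetime, timezone
    from typing import Optional

    AUDIT_LOG_PATH = "ai_audit_log.jsonl"  # illustrative path

    def log_ai_decision(model_version: str, prompt: str, output: str,
                        human_decision: str, reviewer: Optional[str] = None) -> None:
        # One self-contained JSON record per interaction, appended in order.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,    # which model produced the output
            "input": prompt,                   # what the model was asked
            "output": output,                  # what the model produced
            "human_decision": human_decision,  # e.g. "accepted", "overridden", "escalated"
            "reviewer": reviewer,              # who made the call, if anyone
        }
        with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    log_ai_decision("model-2026-01", "Recommend a dosage for patient 4821",
                    "10 mg twice daily", "overridden", reviewer="dr.lee")

Each record ties together what the model was asked, what it produced, which model version produced it, and what a human did with it, which is exactly the chain of facts a later dispute will turn on.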

Disclose AI use and limitations. In consumer-facing and professional contexts, disclose that AI is being used, what its role is, and what its known limitations are. Adequate disclosure is both an ethical obligation and a legal defense against failure-to-warn and deception claims.

Obtain AI liability insurance. The insurance market for AI liability is developing rapidly. Coverage for AI errors, including coverage for regulatory defense costs, third-party claims arising from AI outputs, and IP infringement claims, is available and worth evaluating for companies with significant AI deployment.

Frequently asked questions

Can an AI system itself be held liable?

No. Under current law, AI systems have no legal personhood and cannot be defendants in civil litigation or charged with crimes. Liability attaches to the humans and companies that develop, deploy, and use AI systems. This is expected to remain the case unless and until legislatures create some form of legal status for AI entities, which no major jurisdiction has done.

If an AI vendor's model caused the harm, can I sue the vendor?

Potentially, but the answer depends heavily on your contract with the vendor, the specific nature of the harm, and the applicable legal theory. Product liability claims may be available against the vendor as the model developer. Negligence claims may be available if the vendor was careless in design or testing. Contractual claims depend on what the agreement provides. In many cases, deployers will face liability to harmed third parties and then need to seek contribution or indemnification from vendors in a separate proceeding.

Does it matter whether the AI was the sole cause of the harm or a contributing factor?

Yes. Courts apply causation analysis to AI harm claims just as they do to other tort claims. If the AI's output was one of several contributing factors, the liability analysis will account for the relative contributions of the AI error and the other causes. Human decisions made in reliance on AI output are particularly important: if a professional adopted an AI recommendation without independently verifying it, the professional's decision may be treated as the proximate cause of the harm even though the AI's recommendation was wrong.

How do I protect my company if a customer is harmed by an AI output in my product?

A layered approach is most effective: clear terms of service that disclose AI use and disclaim warranties for AI outputs, human review requirements built into high-stakes workflows, adequate insurance coverage, audit trails that document what the AI produced and what humans decided, and vendor agreements that provide appropriate indemnification for model-level defects. No single measure eliminates the risk, but together they substantially reduce both liability exposure and litigation cost.

Is AI liability treated differently for consumer versus commercial users?

Generally yes. Courts and regulators apply greater protections to consumers than to sophisticated commercial parties. Disclaimers and limitation of liability clauses that are enforceable in commercial contracts may be unenforceable against consumers under state consumer protection laws. The FTC's authority to pursue unfair and deceptive practices is specifically focused on consumer harm. Companies deploying AI in both consumer and commercial contexts need to account for the different legal standards that apply in each.

AI liability law is developing faster than most companies realize, and the direction of travel is clear: courts, regulators, and legislatures are all moving toward holding AI developers and deployers accountable for the harm their systems cause. The companies that manage this risk proactively — through thoughtful deployment practices, appropriate contracts, human oversight, and adequate insurance — will be far better positioned than those that treat liability as someone else's problem until it lands on their doorstep.

If you want to assess your company's AI liability exposure or build a risk management framework around your AI deployments, contact Ana Law to schedule a strategy session.

Contact Ana Law®

212.205.6700 | hi@analaw.com

75 E 3rd Street, Sheridan WY

1300 Pennsylvania Ave NW Suite 700, Washington DC 20004

*by appointment only

Attorney Advertising. Previous results do not guarantee similar outcomes.

© 2022-2025 Ana Law LLC. All rights reserved.
