Article
Apr 8, 2026
Who Owns AI-Generated Content? A Legal Guide for Tech Companies
Who owns AI-generated content? Learn how copyright law treats AI outputs, platform terms, and what it means for your business and IP rights.
Your marketing team uses an AI tool to write product copy. Your engineers use a code generation model to accelerate development. Your design team uses an image generator to produce campaign visuals. The outputs are good. You publish them, ship them, sell products built on them.
Now the question: who owns any of that?
The answer matters more than most companies realize. If AI-generated content isn't protected by copyright, competitors can copy it freely. If the AI tool's terms of service assign ownership to the platform rather than the user, your business is building on IP it doesn't control. And if your AI system was trained on copyrighted material without authorization, the content it generates may carry infringement risk you haven't accounted for.
This is not a hypothetical problem. It is an active legal question with real stakes, and the law is developing faster than most legal guides acknowledge. Here is where things actually stand.
The foundational rule: copyright requires human authorship
US copyright law protects original works of authorship. The Copyright Act doesn't define "authorship" explicitly, but courts and the Copyright Office have been consistent for decades: authorship requires a human being. Works created entirely by machines, animals, or natural processes are not copyrightable.
The Copyright Office formalized this position in a 2023 guidance document, stating that it will not register works produced entirely by AI without human creative involvement. The touchstone is creative control: whether a human selected or arranged the AI's outputs, or provided sufficiently specific creative direction that human expression is reflected in the final work.
The practical implication: a work generated entirely by an AI system in response to a generic prompt is likely not copyrightable. A work where a human made meaningful creative choices — about the prompt, the selection among outputs, the editing and arrangement of the result — may qualify for copyright protection to the extent of those human contributions.
This is not a bright-line rule. It is a spectrum, and where any given AI-generated work falls on that spectrum depends on the specific facts of how it was created.
The Thaler cases: the clearest judicial statement so far
The most directly relevant case on AI and copyright is Thaler v. Perlmutter, decided by the US District Court for the District of Columbia in 2023. Stephen Thaler — the same researcher behind the DABUS patent cases — sought copyright registration for an image generated entirely by his AI system, the Creativity Machine, listing the AI as the author.
The Copyright Office refused registration. The district court affirmed, holding that human authorship is a prerequisite for copyright protection under current law. The court declined to extend copyright to AI-generated works without congressional action, noting that the human authorship requirement has deep roots in copyright doctrine and that any change to it is a legislative, not judicial, question.
The US Court of Appeals for the D.C. Circuit affirmed the decision in 2025, making Thaler the clearest judicial statement to date: pure AI authorship does not qualify for copyright protection in the United States.
What Thaler establishes is the outer boundary. The more practically important question for most companies is not whether AI alone can hold copyright — it's whether and how human creative involvement in an AI-assisted workflow generates protectable rights.
The human-in-the-loop question
The Copyright Office's 2023 guidance introduced a framework for evaluating human authorship in AI-assisted works. The analysis turns on whether the human's creative choices are perceptible in the final output.
Prompt engineering alone is generally not enough. Writing a text prompt and accepting whatever the AI produces is unlikely to constitute sufficient human authorship for copyright purposes, at least under current guidance. The Copyright Office has compared prompts to instructions given to a commissioned artist: the person giving instructions doesn't become the author of the resulting work.
Selection and arrangement can create protectable expression. A human who reviews dozens of AI outputs and selects specific elements, arranges them in a particular way, or combines them with human-created content may generate copyright in the selection and arrangement — even if the individual AI outputs are not protected. This is analogous to how a database compiler can hold copyright in the organization of data that itself is not copyrightable.
Meaningful editing creates protectable expression. A human who substantially modifies an AI output — rewriting, restructuring, adding original elements — can hold copyright in the modified version to the extent of the human contributions. The AI-generated portions remain unprotected, but the human additions are protectable.
Specific creative direction may generate protectable output. This is the most contested area. If a human provides highly specific creative direction — detailed descriptions of composition, style, subject matter, and arrangement — such that the AI output is essentially a mechanical execution of the human's creative vision, the Copyright Office has suggested (without definitively ruling) that copyright may attach. The specificity threshold for this theory is not yet clearly defined.
The practical upshot for companies: document the human creative decisions made throughout any AI-assisted content workflow. The more you can show that humans made meaningful choices about what the content would be — not just that they ran a prompt — the stronger your copyright position.
Platform terms of service: who owns what you generate
Even where copyright could attach to AI-generated content, the terms of service of the AI platform you're using may affect who holds that copyright.
The major AI platforms take different approaches, and their terms change regularly. As of 2026:
OpenAI assigns ownership of outputs to the user, subject to its usage policies. Users own what they generate, though depending on the product, OpenAI may use submitted content to improve its models unless the user opts out.
Midjourney grants paying subscribers ownership of generated images while retaining a broad license back to the content, and free-tier users receive only a limited noncommercial license. The specifics depend on the subscription tier, the subscriber's revenue, and the current version of the terms.
Adobe Firefly is designed for commercial use and structured to give users clear ownership of generated content, partly because Adobe trained the model on licensed and public domain content specifically to reduce infringement risk.
GitHub Copilot generates code under terms that assign the output to the user, but GitHub cautions that suggestions can occasionally match publicly available code, and it offers a filter setting that blocks suggestions matching public code. Users remain responsible for ensuring that any matching output is used consistently with the original code's license.
Three things every company using AI tools for content generation should do: read the current terms of service for every platform you use, review those terms whenever you upgrade your subscription or the platform publishes a terms update, and flag any terms that assign ownership to the platform rather than the user before you build products or campaigns around that content.
Do not assume that paying for a platform means you own what it generates. That assumption is wrong for several major platforms and could leave your business building on IP it doesn't control.
The training data problem: infringement risk in AI outputs
Separate from the ownership question is the infringement question. If an AI system was trained on copyrighted content without authorization, there is a live legal debate about whether its outputs infringe on the training data.
Several major lawsuits are working through US courts on exactly this question. The New York Times sued OpenAI and Microsoft in late 2023, alleging that training on Times content without licensing constitutes copyright infringement and that certain model outputs reproduce Times content verbatim. Getty Images sued Stability AI alleging similar claims about image generation. A consolidated class action involving authors whose books were used in training data is proceeding in California.
None of these cases has produced a final merits ruling yet. The legal theories being tested include direct infringement through training, contributory infringement through outputs, and whether AI generation constitutes fair use. The fair use analysis is particularly complex — courts will need to apply the four-factor test to a genuinely novel fact pattern, and reasonable legal minds disagree about how it should come out.
What this means for companies using AI-generated content commercially:
Assess the training data provenance of the tools you use. Platforms that were trained on licensed content, public domain content, or content generated with rights holders' consent carry lower infringement risk than platforms that scraped the web indiscriminately. Adobe Firefly and Getty's Generative AI are examples of tools designed with this in mind.
Understand that indemnification provisions vary. Some AI platforms offer limited indemnification for copyright claims arising from their outputs. Read these provisions carefully — they typically have significant limitations and are not a substitute for your own risk assessment.
Evaluate your use case. Using AI-generated content for internal drafts carries different risk than publishing it at scale or building a commercial product on top of it. The higher the commercial stakes and the more widely distributed the content, the more carefully you should evaluate the infringement exposure.
What companies should do right now
Audit your AI tools. Identify every AI platform your teams are using to generate content — copy, images, code, design assets, video, audio. For each one, review the current terms of service and understand the ownership structure.
Document human creative decisions. For content where copyright protection matters, create a record of the human choices made throughout the workflow. This documentation supports copyright claims and demonstrates the human authorship that distinguishes protectable from unprotectable content.
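For teams that want to operationalize this, one option is a lightweight structured record attached to each asset. The sketch below is purely illustrative — the field names, the hypothetical tool name, and the workflow are assumptions, not a legal standard or anything the Copyright Office prescribes — but it shows the kind of information worth capturing: prompts written, outputs reviewed and selected, and substantive human edits.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class CreativeDecisionRecord:
    """Illustrative record of human creative input in an AI-assisted workflow.

    Field names are hypothetical; adapt them to your own asset pipeline
    and have counsel confirm what your documentation should capture.
    """
    asset_id: str                       # internal identifier for the final asset
    tool: str                           # AI platform used (check its current terms)
    prompts: list = field(default_factory=list)            # prompts and revisions a human wrote
    outputs_reviewed: int = 0           # how many AI outputs a human reviewed
    outputs_selected: list = field(default_factory=list)   # which outputs were chosen, and why
    human_edits: list = field(default_factory=list)        # substantive edits a human made
    reviewer: str = ""                  # who made the creative decisions
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the record so it can be stored alongside the asset."""
        return json.dumps(asdict(self), indent=2)

# Example entry for a hypothetical marketing image
record = CreativeDecisionRecord(
    asset_id="campaign-hero-0042",
    tool="image-generator-x",  # hypothetical platform name
    prompts=["initial concept prompt", "revised prompt narrowing composition"],
    outputs_reviewed=24,
    outputs_selected=["output-7: chosen because its layout matched the brief"],
    human_edits=["recomposed background", "replaced tagline with copywriter's text"],
    reviewer="jane.doe",
)
print(record.to_json())
```

The design choice here is deliberate: the record emphasizes selection, arrangement, and editing — the categories of human contribution the Copyright Office's guidance treats as potentially protectable — rather than just logging that a prompt was run.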
Assess training data risk for your highest-stakes content. For content you plan to commercialize significantly — product packaging, advertising campaigns, software products — understand where the AI's training data came from and what indemnification, if any, the platform provides.
Build an AI use policy. Companies without an internal policy governing AI tool use are creating legal exposure they don't have visibility into. An AI use policy should address which tools are approved for which use cases, what human review is required before AI-generated content is published or deployed, and how ownership and attribution are handled internally.
Get ahead of the regulatory curve. The FTC, the Copyright Office, and Congress are all actively working on AI content issues. The legal landscape will look different in 12 months than it does today. Companies that have built internal governance structures around AI content use will adapt faster than those that haven't.
Frequently asked questions
Can I copyright an image I created using an AI image generator?
It depends on how much human creative input went into the final image. A simple text prompt that generates an image is unlikely to produce a copyrightable work under current Copyright Office guidance. Significant human involvement — extensive editing, selection and arrangement of multiple outputs, combination with human-created elements — may generate copyright in the human contributions. The Copyright Office evaluates these on a case-by-case basis.
If I own the copyright to AI-generated content, can I register it?
The Copyright Office will register works that contain sufficient human authorship. For AI-assisted works, applicants are required to disclose the AI's involvement and describe the human creative contributions. The Office will register the human-authored portions and exclude purely AI-generated elements from the registration.
Does using an AI tool trained on copyrighted material mean my outputs infringe?
Not automatically, but the question is genuinely unsettled. The major litigation on this issue has not produced final rulings. The risk level depends on the specific tool, how it was trained, and whether your outputs reproduce protected expression from the training data. Using tools trained on licensed or public domain content materially reduces this risk.
What happens to AI-generated content that isn't protected by copyright?
It enters the public domain immediately. Anyone can copy it, use it, and build on it without your permission. This is why the ownership question matters commercially: if your competitors can freely reproduce your AI-generated marketing content, product descriptions, or design assets, you have no IP leverage to stop them.
Are there countries where AI-generated content is protected by copyright?
Some jurisdictions take a more flexible approach. The UK historically extended copyright protection to computer-generated works through a provision in its Copyright, Designs and Patents Act, though recent cases have raised questions about how that provision applies to AI. China has seen court decisions granting copyright protection to AI-generated works in certain circumstances. The international landscape is fragmented, and companies operating across borders need jurisdiction-specific advice.
The law around AI-generated content ownership is developing in real time. What is clear today is that human authorship matters, platform terms matter, and training data provenance matters — and companies that haven't thought carefully about all three are taking on IP risk they can't see.
If you want to assess your company's AI content exposure or build a governance framework that protects your IP position as the law develops, contact Ana Law to schedule a strategy session.