Creator’s dilemma: Dissonance in copyright law at the heart of GenAI

Generative AI is reshaping how businesses create, scale and distribute content. From marketing copy to legal summaries and original art, we’re witnessing a seismic shift in creative workflows. But with this transformation comes legal tension — and few areas are as murky as intellectual property rights.

Creators in particular have been uniquely affected by the rise of GenAI. Unlike traditional tools, GenAI systems can ingest and replicate vast swaths of existing creative works, often without a proper license or even the original creator’s knowledge. This has sparked widespread concern across creative industries, as the value and ownership of their intellectual property are threatened not only by direct imitation, but also by the potential displacement of human-made content in favor of algorithmically generated alternatives.

As an example, consider the recent trend of using OpenAI’s image generator to create Studio Ghibli-inspired images in a fraction of the time it took studio director Hayao Miyazaki to create his iconic works. Many fans have commented that this AI filter goes against Miyazaki’s ethos; of AI-generated art, he has said, “I strongly feel that this is an insult to life itself.”

Over the past few years, I’ve advised clients ranging from startups to Fortune 50 companies on how to use GenAI responsibly and in compliance with law. One pattern that keeps emerging is a dual challenge: creators (whether businesses or individuals) often want to protect the outputs they generate with AI tools (for commercial purposes or a myriad of other reasons), while also ensuring their proprietary content isn’t being used to train others’ models without their explicit consent.

But if more creators opt out of contributing their content to training datasets, how can GenAI models continue to improve without access to the high-quality data that makes them useful in the first place? Yes, GenAI developers can try to license every single piece of copyrighted material used in training, but given the vast volume and diversity of data required to effectively train and fine-tune large language models (LLMs) over time, is that truly realistic or operationally feasible? 

This is what I call the creator’s dilemma: under current US law and regulatory guidance, you can’t generally copyright GenAI-generated content — but others may potentially use your copyrighted works to train their models. However, the law here is still unsettled and murky at best. Here’s what is behind the conflict, and how companies can navigate it.

GenAI outputs can’t be copyrighted without sufficient human authorship 

In 2023, the US Copyright Office launched a comprehensive initiative examining the copyright law and policy issues raised by AI. One tangible result of this initiative is a three-part series analyzing these issues. Part 2, published in January 2025, focused specifically on whether GenAI outputs are eligible for copyright protection. The short answer: only if they include sufficient human authorship.

The Office reaffirmed that copyright law — based on Article I, Section 8 of the US Constitution — requires originality and human creativity. Simply generating content from a prompt like “write a children’s book about space whales” doesn’t make the result copyrightable. The Office based its guidance on the Supreme Court’s 1991 ruling in Feist Publications v. Rural Telephone Service, where the Court stated that some level of creativity must be present for a work to be protectable, but “the requisite level of creativity is extremely low; even a slight amount will suffice.”

That said, there are a few exceptions. In its guidance, the Office clarified that copyright protection may apply when:

  • The human uses the AI as a tool, exercising creative control over the output 
  • The output includes perceptible excerpts from a human-authored work 
  • The AI-generated material is modified or arranged in a creatively meaningful way 

If the AI is just enhancing an existing human draft, there’s a stronger argument for copyrightability. But when AI is generating from scratch based on vague prompts or without substantively leveraging human-generated content, it’s much harder to claim protection.

Fair use and the training data dilemma

If you can’t copyright what AI helps you make, surely others can’t train their AI on your original work — right? Not necessarily.

In a pair of cases this year — Kadrey v. Meta Platforms and a separate suit against Anthropic — federal judges dismissed authors’ claims that using copyrighted books to train LLMs was an infringement. The courts suggested that the training process might qualify as fair use — particularly when it doesn’t replicate expressive elements or directly harm the market for the original works. 

These decisions gave some comfort to GenAI developers. But they are far from definitive. 

In contrast, the Thomson Reuters v. Ross Intelligence decision reached a different outcome. There, the court ruled that Ross’s use of Westlaw summaries to train a competing AI legal product was not fair use, citing the “market substitution” test. Because the AI model was designed to compete with Westlaw directly, the use undermined the plaintiff’s market and failed fair use analysis.

However, the court expressly limited its opinion to non-generative AI, noting that Ross’ system did not generate new expressive works, but rather used editorial content to build a competing research tool. As a result, the decision may not directly apply to GenAI models trained on expressive materials like images, music or literature.

The takeaway? Fair use is context-specific. Factors include:

  • Whether the use is transformative 
  • Whether the use is commercial 
  • The nature of the copyrighted work 
  • The effect on the market for the original 

While some recent rulings (like Kadrey v. Meta) have suggested that training GenAI models on copyrighted works may qualify as fair use, the Ross case serves as a warning. Courts may take a stricter view when an AI tool competes in the same market as the material it was trained on — especially when the material contains editorial or creative structure, such as Westlaw’s headnotes and classification system. Given the lack of a consistent legal standard, changes in law or further guidance from the courts may eventually be needed to resolve these issues.

Why this paradox matters to business strategy 

This creates a strategic gap. I’ve worked with clients who invest heavily in GenAI tools to create marketing, legal or even artistic content — only to discover that they may not be able to claim copyright over the final product. Meanwhile, competitors (or the model developers themselves) might have trained their AI using publicly available copyrighted works, potentially without consent.

This puts creators in a tough spot. There’s value in using GenAI tools to accelerate work, but less clarity around how to protect that value from being copied or reused. The Office highlights a policy contradiction: while training may sometimes be justified under fair use, outputs that are substantially similar to copyrighted works (or that compete in the same market) may not be protected by fair use and could constitute infringement. The lack of clear legal standards for outputs creates a gap between what is permissible in training and what is permissible in deployment.

In light of the ongoing uncertainty in copyright law surrounding generative AI, some LLM providers have proactively offered indemnification to certain users for third-party infringement claims arising from the outputs generated by their models. These indemnification programs are designed to instill confidence in customers who may be wary of legal exposure when incorporating GenAI into their workflows. However, these protections often come with significant caveats and are generally limited to enterprise customers.

The paradox is especially urgent for companies building AI-native products. Can you license your GenAI output to partners? Can you stop others from copying it? How do you handle client expectations around ownership? What happens if the LLM has been trained on copyrighted material and the fair use exception doesn’t apply?

These are questions I’m increasingly helping clients to think through, and they’re forcing legal and product teams to rethink how intellectual property fits into GenAI-enabled workflows.

How companies can stay ahead

Until Congress or the courts establish clearer rules, companies need to take proactive steps. Based on my experience advising across industries, here are some risk mitigation strategies that work: 

  • Use GenAI as a co-creation tool, not a replacement: The more human direction and editing involved, the stronger your copyright claim to outputs. 
  • Document human contributions: Keep records of your input during GenAI-assisted content creation to help support any IP assertions. 
  • Don’t rely solely on copyright: Consider contracts, trade secrets or trademarks to protect high-value assets. 
  • Audit how your training data is sourced: If you’re developing your own models, make sure you know what’s in the dataset and how it was obtained to mitigate the risk of copyright infringement liability. 
  • Monitor regulatory trends: Laws and regulations relating to GenAI are rapidly shifting. For example, Texas just recently passed the Texas Responsible AI Governance Act, making it the third US state to adopt a comprehensive AI law.  

A path forward 

The Office has discussed solutions like licensing frameworks, though it acknowledges this could entrench inequality by favoring large players with deep pockets. President Trump recently weighed in disfavoring the licensing approach, stating, “Of course, you can’t copy or plagiarize an article, but if you read an article and learn from it, we have to allow AI to use that pool of knowledge without going through the complexity of contract negotiations.”

The US could also potentially follow the lead of other jurisdictions like the EU, which has promulgated legal exceptions for text and data mining applications that are relevant to GenAI, but Congress has yet to act.

In the meantime, the mismatch between what can be used to train and what can be protected continues to frustrate both creators and developers. As I’ve seen firsthand, this paradox complicates how companies think about value capture and competitive advantage.

But awareness is a first step. By understanding how copyright law is evolving and adjusting internal practices accordingly, businesses can minimize risk and make smarter GenAI investments — even in an uncertain legal environment. 

This article is published as part of the Foundry Expert Contributor Network.
August 13, 2025