AI companies argue that their “systems make fair use of copyrighted content by transforming it into something new.”
In breaking down that statement, you’ll find a collection of persuasion techniques that propaganda researchers have been cataloging for nearly a century.
In this post, we’ll talk about how the language surrounding AI is being used to short-circuit the questions you should be asking before you trust it with anything that matters to your business.
AI Companies: Self-Interested Legal Defense as Consensus
“Systems make fair use of copyrighted content by transforming it into something new.” This reads as a statement of fact when it is actually a claim under litigation. These companies have taken a self-interested legal defense and presented it as self-evident truth. There is no qualification, no acknowledgment that this is a position being actively challenged in court.
This is a technique Robert Cialdini would recognize immediately. His work on the psychology of persuasion, cited in every serious marketing and behavioral science program for forty years, identified authority as one of the core mechanisms people use to make decisions under uncertainty. We follow the lead of people and institutions we perceive as credible. When a legal defense is stripped of its legal context and restated as plain description, it stops sounding like an argument and starts sounding like something everyone already knows. The reader is being asked to absorb the claim, rather than evaluate it.
Then there’s the second half: “by transforming it into something new.” This is the part that sounds like a conclusion when it is actually the contested premise. In logic, this is called begging the question. Whether AI outputs are “something new” or sophisticated recombinations of ingested material is the exact thing being argued in courtrooms right now. Treating it as a given is a magic trick performed with sentence structure.
Old Playbook; New Reality
Neil Postman spent his career studying how language reshapes reality. His concern was never just about technology itself but about the words wrapped around it. Postman warned that when language is used in ways that distort reality, the harm can be significant, even when the stated intentions sound reasonable. He called propaganda “a most mischievous word” precisely because people assumed it was something only governments or bad actors did. He argued it was far more common, far more subtle, and far more embedded in everyday communication than anyone wanted to admit.
If you want to see what this looks like in practice, watch the ice cream argument scene from Thank You for Smoking. In it, a tobacco lobbyist sits across from his young son and demonstrates that you don’t have to prove you’re right. You just have to prove the other person is wrong. Or better yet, you redirect the conversation until the original question disappears entirely. He reframes the argument, embeds his conclusion inside his premise, and delivers the whole thing with enough confidence that credentials become irrelevant. It’s funny. It’s also the exact same set of moves being used to sell you on the idea that copying someone’s work is the same as creating something new.
Renee Hobbs, who founded the Media Education Lab at the University of Rhode Island and literally wrote the book on propaganda education for the digital age, has spent years teaching that propaganda is not a relic of wartime posters and state-run media. It is a feature of daily life online. Her framework focuses on communication that invites us to respond emotionally rather than analytically. That framing describes exactly what is happening in the AI copyright conversation. The language is engineered to make you feel like the question has already been answered so you never think to ask it yourself.
Then there’s Chomsky and Herman’s propaganda model, which identifies the structural filters through which information passes before it reaches an audience. One of those filters is sourcing: the tendency to rely on institutional voices as default authorities. For example, when a news article writes “AI companies have argued,” it is using an institutional source as if that source is neutral, but it is not. These companies are defendants in active lawsuits. Their “argument” is a legal defense strategy, not an established fact.
What the Courts Say
The fair use question is far from settled.
In June 2025, a federal court found that using copyrighted books to train an AI model could be considered transformative. However, that same ruling drew a hard line at how those books were obtained. Downloading millions of pirated copies from shadow libraries was not fair use, regardless of what happened to them afterward. That case, Bartz v. Anthropic, resulted in a $1.5 billion settlement.
Days later, a different judge in the same district reached a similar conclusion on the training question in Kadrey v. Meta but went further on the market harm question, acknowledging that AI models could flood the market with content similar enough to the originals to undercut the authors who wrote them. The court called this a form of indirect substitution and said copyright law needs to be flexible enough to account for it.
Meanwhile, in the Thomson Reuters v. Ross Intelligence case, the court rejected the fair use defense entirely. Ross used Westlaw’s curated legal headnotes to build a competing search tool. The court found the use was commercial, not transformative, and directly competitive. No amount of “transformation” language saved it.
In early 2026, Universal Music, Concord Music Group, and ABKCO filed a $3.1 billion lawsuit against Anthropic, alleging the company built its AI on pirated works obtained through torrenting.
Over 70 copyright infringement lawsuits between content creators and AI developers are now pending in U.S. federal courts. No appellate court has issued a definitive ruling on fair use in this context. The question is not settled.
Why the Evangelist Counterargument Keeps Working
The most common pushback from AI evangelists goes something like this: “AI learns the way humans learn. It reads, it absorbs, it creates. Just like a person would.”
This is a false equivalence that works because it sounds intuitive. A person who reads a hundred books and then writes a novel is doing something fundamentally different from a system that ingests millions of copyrighted works, converts them into mathematical weights, and produces outputs that can compete directly with the originals in the same marketplace.
A human reader:
- does not store the full text
- cannot reproduce long passages verbatim on demand
- cannot flood a market with derivative content at machine speed
The analogy humanizes the machine in order to borrow the protections we extend to human creativity. That is more a framing device than an argument.
Another common statement: “The information is already out there. It’s publicly available.” This confuses access with rights. The fact that a book exists in a library does not mean you can photocopy it, rebind it, and sell it. Access and license are two completely different things. This argument relies on the listener not knowing the difference.
A third: “This is how progress works. You can’t stop innovation.” This is what Cialdini would recognize as scarcity and social proof working in tandem. It creates urgency (you’ll be left behind) while implying consensus (everyone else has already accepted this). It is an emotional lever disguised as inevitability.
Why This Matters if You Run a Business
You don’t need to be a copyright lawyer or a media theorist to care about this. If you run a business, you are already making decisions in an environment shaped by these claims.
When someone tells you an AI tool can build your website, write your content, or replace your creative team, you are hearing a version of the same persuasion framework. The language is designed to make the decision feel obvious: complexity hides behind simplicity, and risks are reframed as opportunities.
The FTC has noticed. Since 2024, the agency has brought over a dozen “AI washing” enforcement actions against companies that overstated what their AI products could do. In August 2025, the FTC sued Air AI for claiming its conversational AI could replace human sales representatives and deliver unrealistic business results. The agency alleged the technology was often unable to perform basic functions like placing calls or scheduling appointments. Entrepreneurs and small businesses lost up to $250,000 each.
This is just one example of what happens when persuasion replaces substance.
“AI for Everyone”
The “AI for everyone” framing has a specific blind spot that rarely gets named. Harvard researcher Maitreya Shah, a blind lawyer studying AI fairness, has found that people with disabilities are left out not just of AI tools, but of the conversations about AI fairness meant to fix them. The reform conversation reproduces the same exclusion as the original problem.
AI-generated text and images regularly misrepresent the experiences of people with disabilities and reinforce harmful stereotypes, while AI systems trained on inaccessible data perpetuate or amplify existing barriers. The statistical nature of the technology makes this worse. Disabled people are the most heterogeneous of any protected group, which means they are the furthest from the statistical average AI optimizes for, and the least likely to benefit from tools built around that average.
When these harms surface, the same language mechanics described above kick in. Discrimination against disabled people gets dismissed as anecdotal or statistically insignificant, because the harms are unique enough that cluster analysis finds no pattern. No pattern means no accountability. The framing does the work. Harm becomes invisible not because it doesn’t exist, but because the measurement framework was never built to see it.
The New York City Bar Association’s Presidential Task Force on AI said what the industry language won’t: equitable AI cannot be achieved without proactive disability-centered design, rigorous inclusion throughout development, and responsive regulation. That’s a demand for accountability that transformative technology language is specifically designed to deflect.
The Human Skills That Protect You
The people who study propaganda for a living teach a specific discipline. When you hear or read a claim, you learn to ask:
- Who is making this claim?
- What do they gain from it?
- What is being assumed rather than proven?
- What question is being skipped?
Those four questions apply to every AI pitch you will encounter. If they are this simple, why aren’t more people asking them?
Postman’s entire body of work comes down to a simple warning: technology is not neutral, and the language used to describe it is never accidental. The words are chosen and the framing is deliberate. Your job, as someone making real decisions with real consequences, is to see the framing before you respond to it.
The bar for sounding qualified has never been lower. The bar for being qualified has not moved.
Sources
- Renee Hobbs, Mind Over Media: Propaganda Education for a Digital Age (W. W. Norton, 2020). Winner of the 2021 PROSE Award for Excellence in Social Sciences.
- Neil Postman, Technopoly: The Surrender of Culture to Technology (Vintage Books, 1993). Also: Renee Hobbs, “A most mischievous word: Neil Postman’s approach to propaganda education,” Harvard Kennedy School Misinformation Review, 2023.
- Robert B. Cialdini, Influence: The Psychology of Persuasion (Harper Business, revised edition, 2021). Seven principles of persuasion: reciprocity, commitment/consistency, social proof, authority, liking, scarcity, and unity.
- Edward S. Herman and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media (Pantheon Books, 1988). Five-filter propaganda model: ownership, advertising, sourcing, flak, and ideology.
- Harold Lasswell, Propaganda Technique in the World War (1927). Foundational work on the mechanics of mass persuasion.
- Bartz v. Anthropic PBC, N.D. Cal. (June 23, 2025). $1.5 billion class action settlement.
- Kadrey v. Meta Platforms, Inc., N.D. Cal. (June 25, 2025). Partial summary judgment on fair use; market dilution framework introduced.
- Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., D. Del. (February 2025). Fair use rejected; commercial competitive use found infringing.
- In re OpenAI Copyright Infringement Litigation, S.D.N.Y. (October 2025). Court denied OpenAI’s motion to dismiss on substantially similar outputs.
- FTC “Operation AI Comply” enforcement actions (2024 through 2026). Twelve-plus AI washing cases including Air AI, Growth Cave, Workado, and Ascend Ecom.
- Copyright Alliance, “AI Copyright Lawsuit Developments in 2025: A Year in Review” (January 2026). Over 70 active lawsuits tracked.
- New York City Bar Association Presidential Task Force on Artificial Intelligence and Digital Technologies, “The Impact of the Use of AI on People with Disabilities,” June 2025. nycbar.org
- Harvard Gazette, “Why AI Fairness Conversations Must Include Disabled People,” April 2024. news.harvard.edu
- Built In, “AI Has the Potential to Transform Accessibility. Why Hasn’t It?,” October 2025. builtin.com