Can You Sue When AI Lies About You? Early U.S. Cases on AI Defamation
- Mark Addington

AI hallucinations, defamation, and liability are no longer academic questions. Large language models are now at the center of real lawsuits in U.S. courts, and those cases are beginning to show where the risk sits for both AI developers and ordinary businesses that use or are targeted by these tools.
Below is an updated look at three American cases and what they mean for your organization.
What the early AI defamation cases show
Walters v. OpenAI (Georgia)
In Walters v. OpenAI, L.L.C., a Georgia radio host sued after ChatGPT allegedly told a journalist that he had been accused of embezzling funds in a lawsuit, even though he was not a defendant in that case. A Georgia trial court granted summary judgment in favor of OpenAI in May 2025. The court concluded that Walters could not prove key elements of defamation, including negligence or “actual malice,” and could not show concrete damages, particularly because the journalist neither believed nor republished the summary. The court also noted OpenAI’s accuracy disclaimers and efforts to warn users that outputs may be wrong.
Walters lost for very traditional reasons. The judge did not create a special AI rule, but instead applied ordinary defamation doctrine: no actionable defamatory meaning, no fault, and no real-world harm.
Wolf River Electric v. Google (Minnesota)
By contrast, LTL LED, LLC d/b/a Wolf River Electric v. Google LLC involves a private business claiming measurable financial losses. Wolf River, a Minnesota solar contractor, alleges that Google’s “AI Overview” falsely stated that the company had been sued by the Minnesota attorney general for deceptive sales practices. In reality, the attorney general’s prior enforcement action did not name Wolf River as a defendant.
According to the complaint, customers saw the AI-generated summary, concluded the company was under government suit, and canceled substantial projects, including contracts worth over $100,000. Wolf River seeks significant damages and injunctive relief in a case removed to federal court.
This matters for risk analysis. Unlike Walters, Wolf River is a private company pointing to specific lost deals and reputational harm with a clear causal story: Google’s AI told customers there was an attorney general lawsuit, customers believed it, and business was lost.
Battle v. Microsoft (Maryland)
In Battle v. Microsoft Corp., an Air Force veteran and aerospace educator sued Microsoft after Bing’s AI-assisted search results allegedly conflated him with a different person of nearly the same name who had been convicted of seditious conspiracy following attempts to join the Taliban. When users searched for “Jeffery Battle,” Bing allegedly showed a blurb that combined Battle’s professional biography with a statement that “Battle was sentenced to eighteen years in prison” for levying war against the United States, and linked to a Wikipedia page for the other man.
The plaintiff alleged “libel by juxtaposition,” arguing that Bing’s AI tool stitched together facts about two different people into a single, damaging narrative. The federal court in Maryland later granted Microsoft’s motion to compel arbitration and stayed the case, which means the dispute is unlikely to produce a public merits decision. Still, it remains an important example of how AI can generate defamatory implications by merging sources rather than inventing facts from scratch.
Together, Walters, Wolf River, and Battle show three patterns: an AI “hallucination” with no provable harm, a business plaintiff alleging concrete financial injury, and an individual plaintiff whose reputation was allegedly damaged by AI combining two different identities.
How courts are treating AI-generated statements
The Walters order is still the clearest written signal from a U.S. court. The Georgia court treated ChatGPT’s output as a statement that could, in principle, be defamatory, then walked through familiar elements: falsity, defamatory meaning, fault, and damages. It granted summary judgment for OpenAI because Walters could not prove negligence or actual malice, nor could he show that anyone believed or acted on the statements.
The decision did not create strict liability for AI developers, nor did it immunize them. Instead, it applied ordinary defamation rules to AI outputs. Other commentary emphasizes that developers will still face fact-specific litigation over elements such as fault and damages, especially when plaintiffs can show that companies had notice of false statements and failed to correct them.
That leaves the platform immunity question. Section 230 of the Communications Decency Act has long shielded websites from liability for user-generated content. In Gonzalez v. Google LLC, 598 U.S. 617 (2023), the Supreme Court vacated and remanded a Ninth Circuit decision in a case challenging YouTube’s recommendation algorithms, and deliberately avoided drawing new lines about how far Section 230 reaches.
Generative AI does not fit neatly into the old Section 230 model because systems like ChatGPT, Gemini, and Copilot generate new text rather than merely hosting or ranking user submissions. Scholars and litigators are already arguing that Section 230 should not protect AI vendors when their systems fabricate or synthesize defamatory content.
For now, the emerging pattern is that courts are willing to treat AI outputs as statements subject to ordinary defamation analysis, while questions about platform immunity and “information content provider” status remain open.
Where the real risk sits for ordinary businesses
Most companies will never be sued as AI developers, but they can still create serious exposure through how they use these tools, or suffer harm from tools they do not control.
First, employees increasingly use chatbots as informal research tools, especially to check the backgrounds of applicants, vendors, or competitors. When a manager pastes AI-generated “summaries” of supposed lawsuits, criminal charges, or regulatory actions into an email or hiring note and others rely on them, the resulting dispute will look like a traditional defamation or discrimination case involving human speakers. “The AI said so” is not a defense.
Second, companies can be targets of AI tools they do not control. Wolf River alleges that customers found AI-generated text in Google search results suggesting the company had been sued by the attorney general for deceptive sales practices, and that those customers walked away from substantial contracts as a result. The legal theory is straightforward: identify specific false statements, show that customers saw them, and quantify the resulting lost business.
Battle underscores a third point: AI can cause reputational harm without fabricating a lawsuit out of whole cloth. By blending two people’s biographies into one narrative, Bing’s AI allegedly conveyed that an Air Force veteran and educator was a convicted terrorist. That is not a hallucinated fact so much as a machine-created misattribution, and courts will have to decide how to analyze that within existing doctrines like defamation by implication and libel by juxtaposition.
Open questions
Key issues that remain unsettled include:
The level of fault that courts will require from AI vendors in cases involving private plaintiffs and private businesses.
How much weight courts will give to accuracy disclaimers and “do not rely” language when vendors know about persistent hallucinations.
How defamation law will intersect with product liability, unfair trade practices, and consumer protection statutes in AI-related cases.
Whether standards like “actual malice” for public figures, which doomed Walters and may constrain similar suits, will be applied differently where plaintiffs can show repeated notice and failure to correct.
None of these has a definitive appellate answer in the United States yet.
Practical steps for businesses
Even while the law develops, you can take concrete action to reduce risk:
Adopt an AI use policy that treats outputs as unverified drafts, not facts, and requires verification from primary documents or reputable databases before repeating assertions about identifiable people or companies.
Prohibit employees from relaying AI-generated “gossip” about individuals, competitors, or counterparties, and make it a policy violation to circulate unverified AI allegations internally or externally.
If you deploy customer-facing chatbots, configure them to refuse to answer questions about specific real people or competitors when technically feasible, and log prompts and outputs so you can audit and correct problematic responses (see the brief sketch after this list).
If an AI system appears to defame your business, preserve screenshots, prompts, and timestamps; send a written notice to the vendor identifying the false statements; and document any lost sales or contracts that appear linked to the output.
Review contracts with AI vendors and, where leverage allows, negotiate representations, warranties, and indemnities related to AI-generated content and remedial obligations if harmful outputs occur.
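To make the chatbot bullet concrete, here is a minimal sketch of one way to pair a simple refusal filter with an append-only audit log. The `generate_reply` callable, the blocklist entries, the refusal text, and the log path are all hypothetical placeholders rather than any particular vendor’s API; a production deployment would need more robust entity matching, access controls, and retention policies.

```python
# Minimal sketch of a chatbot guardrail plus audit log, assuming a generic
# generate_reply(prompt) function supplied by whatever model you deploy.
# Blocklist entries, refusal text, and log path are illustrative placeholders.
import json
import re
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("chatbot_audit.jsonl")          # append-only prompt/output log
BLOCKED_SUBJECTS = ["acme solar", "jane doe"]    # named competitors or real people

REFUSAL = (
    "I can't comment on specific people or companies. "
    "Please contact our team directly for that information."
)

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt mentions a subject on the blocklist."""
    text = prompt.lower()
    return any(re.search(rf"\b{re.escape(s)}\b", text) for s in BLOCKED_SUBJECTS)

def answer(prompt: str, generate_reply) -> str:
    """Route the prompt through the guardrail, then log both sides for audit."""
    reply = REFUSAL if is_blocked(prompt) else generate_reply(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "reply": reply,
        "refused": reply == REFUSAL,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")       # audit trail for later review
    return reply
```

The design choice worth noting is the log itself: an append-only record of every prompt and output is what lets you find, correct, and document a problematic response after the fact, which is exactly the evidence trail the litigation discussed above turns on.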
For now, there is no separate law of “robot speech.” Courts are fitting large language models into existing defamation and platform liability frameworks. The consistent theme across Walters, Wolf River, and Battle is that human judgment still drives liability. Legal risk usually turns on what people choose to believe, repeat, and act on after an AI system speaks, not just on what the model outputs in a chat box.



