
AI Defamation: When ChatGPT or Gemini Makes False Statements

AI defamation is a growing legal issue we’re seeing in Australia. AI systems are increasingly generating biographical information, professional histories, and detailed claims about real people, and getting them wrong. These are not minor errors. They can include false allegations of criminal conduct, fabricated professional histories, and damaging claims that spread across the internet before anyone notices.

If ChatGPT, Google Gemini, or another generative AI platform has generated false statements about you, you may have legal options. This is an emerging and fast-moving area of law, but the core principles of Australian defamation law still apply. The central question, and it remains genuinely unsettled, is who bears legal responsibility when an AI system publishes false and damaging information about a real person and causes measurable harm to that person’s reputation.

This fact sheet explains the current legal framework, the key liability questions, what happened in Australia’s first major AI defamation case, and what to do if you have been affected.

Key Takeaways

  • AI-generated defamation is an emerging area of Australian law. Courts have not yet definitively resolved who bears liability when an AI system publishes false information about a person.
  • Potential defendants may include the AI company (such as OpenAI or Google), the user who prompted the output, or the platform hosting the content.
  • In 2023, Brian Hood, then Mayor of Nillumbik Shire Council, was reported as among the first people globally to threaten legal action against OpenAI over a ChatGPT hallucination, attracting significant international attention to AI defamation liability.
  • Australian defamation law does not currently contain AI-specific provisions, but the existing framework can potentially apply to AI-generated publications.
  • Preserving evidence quickly is critical: AI outputs are not always reproducible, and platforms may update or remove content without notice.
  • O’Brien Criminal & Civil Solicitors can advise you on your options if an AI system has made false statements about you.

AI Defamation: Act Quickly If an AI System Has Generated False Statements About You

  • Screenshot and preserve all AI outputs containing the false statement, including timestamps and URLs
  • Request removal from the AI company directly; most have feedback and content correction mechanisms
  • Document any harm caused, whether professional, financial, or reputational
  • Seek legal advice before taking further steps

Call O’Brien Criminal & Civil Solicitors on 02 9261 4281 for a confidential discussion about your options.

The Liability Question: Who Is Responsible For AI Defamation?

Traditional defamation law was designed around human authors making statements with intent or negligence. Artificial intelligence systems present a genuinely novel problem for the legal system because the publication chain involves multiple parties: the company that built the AI, the user who prompted it, and often a third-party platform hosting it.

The AI company. Is OpenAI or Google liable for what their systems generate? This may depend on whether the company knew or should have known the large language model was generating defamatory information about real people, whether adequate safeguards were in place to prevent defamatory outputs, and whether the company played a sufficiently active role in the publication. Although defamation in Australia is generally a strict liability tort, fault becomes relevant for secondary publishers through defences such as innocent dissemination, which makes the knowledge and safeguards question central to whether an AI company can be held liable.

The user. Is the person who prompted the AI liable for the output? Where a user intentionally prompts an AI to generate damaging false claims about a person and then publishes or shares that output, they may be directly liable as a publisher. Liability in this scenario will depend heavily on what the user did with the output; simply generating a response is different from publishing it to third parties.

The platform. Is the website or service hosting the AI system liable as a publisher? Australian defamation law does not have a direct equivalent of the US Communications Decency Act Section 230 immunity framework. Platform liability in Australia is assessed by reference to publication principles and the specific role the platform played in the publication, and the innocent dissemination defence may be relevant where a platform had no knowledge of the defamatory content. Liability does not follow automatically from the fact of hosting.

Australian courts have not yet definitively answered these questions, and liability will likely be determined on a case-by-case basis as claims are tested.


Australia’s First AI Defamation Case: Mayor Brian Hood

In early 2023, Brian Hood, then Mayor of Nillumbik Shire Council in Victoria, was widely reported as among the first people in the world to threaten legal action against OpenAI over a ChatGPT hallucination. ChatGPT falsely claimed that Hood had been convicted of bribery in connection with a foreign bribery scandal involving subsidiaries of the Reserve Bank of Australia. In fact, Mr Hood had been a whistleblower in that matter, the opposite of what the AI claimed.

OpenAI initially resisted liability, arguing that the company did not author the specific statement (the user’s prompt triggered the output), that it merely hosts outputs without editorial control, and that users are responsible for verifying AI-generated content.

Hood’s solicitors sent a concerns notice to OpenAI threatening defamation proceedings. Following the complaint, and before any case was filed, the ChatGPT output was corrected. The matter did not proceed to litigation, but it attracted significant attention, including coverage by ABC News and international outlets, as an early example of a formal legal demand directed at an AI company over a hallucination about a real person, and of a concerns notice producing a practical result before proceedings were commenced.

Overseas, a parallel matter involving US radio host Mark Walters, who sued OpenAI in Georgia after ChatGPT falsely associated him with embezzlement allegations, has drawn similar attention and is widely cited as a leading international test case on AI defamation liability.

The Current Australian Legal Framework On AI Defamation

Australian defamation law requires proof of publication of a defamatory statement, one that would cause a reasonable person to think less of the subject, together with identification of the plaintiff. Under the uniform defamation law reforms that have been progressively adopted across Australian states and territories, a serious harm threshold also applies in most jurisdictions, though the timing of commencement and the precise application of that threshold vary.

There are no AI-specific defamation provisions in Australian law at present. Courts applying the existing framework to AI-generated content will likely grapple with open legal questions, including:

  • whether, and in what circumstances, an AI system’s output constitutes publication by the company that developed it
  • what level of knowledge or fault is required on the part of the AI company or the user
  • whether a company’s failure to implement adequate safeguards is sufficient to establish fault
  • how defences such as honest opinion apply where there is no human author in the traditional sense

The News Media and Digital Platforms Mandatory Bargaining Code and broader digital platforms work by the ACCC reflect a regulatory environment that is increasingly engaged with platform accountability, though neither directly addresses AI-generated defamation, and future reform in this specific area remains to be seen.

What Is an AI “Hallucination”?

AI language models generate text by predicting statistically likely sequences of words based on training data. They do not “know” things in the way a person does, and they can produce entirely fabricated information that sounds credible and specific. In AI terminology, this is called a hallucination.

Hallucinations are not random errors; they can be highly plausible, detailed, and confidently stated. A hallucination about a real person might include false allegations of criminal conduct, fabricated employment history, invented professional failures, or invented associations with wrongdoing. The fact that the AI system did not “intend” to make a false statement does not, under Australian law, automatically mean there is no liability.
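
Because the legal analysis often turns on how these systems work, a concrete illustration can help. The toy Python sketch below uses invented words and probabilities, not any real model’s data, but it shows the core mechanism: text is extended word by word according to statistical weight, with no step at which truth is checked.

    import random

    # Invented probabilities for illustration only; a real model learns
    # distributions like these from vast amounts of training text.
    next_word = {
        ("was", "convicted"): [("of", 0.9), ("in", 0.1)],
        ("convicted", "of"): [("fraud", 0.4), ("bribery", 0.35), ("theft", 0.25)],
    }

    def continue_text(words):
        # Extend the text by sampling whatever is statistically likely.
        # Nothing in this loop checks whether the sentence is true of anyone.
        while tuple(words[-2:]) in next_word:
            options, weights = zip(*next_word[tuple(words[-2:])])
            words.append(random.choices(options, weights=weights)[0])
        return " ".join(words)

    # Might print "the mayor was convicted of bribery" -- fluent,
    # specific, and potentially entirely false.
    print(continue_text(["the", "mayor", "was", "convicted"]))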

Evidence Challenges in AI Defamation

AI-generated defamation raises particular evidentiary challenges that do not arise in traditional cases.

Preservation. AI outputs are not static publications. A response generated by ChatGPT today may differ from the response generated tomorrow if the model has been updated. Screenshots and screen recordings are essential, but their authenticity can be challenged. Timestamps, URL data, and detailed documentation of how the output was obtained are all important.

Authentication. Screenshots of AI outputs are easy to fabricate. Courts will require reliable authentication, which may include technical evidence about the platform, the prompt used, and the context in which the output was generated. Engaging a lawyer early to assist with proper preservation is advisable.
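
If you are preserving evidence yourself before obtaining advice, one simple supporting step is to record a cryptographic hash and capture time for each screenshot at the moment it is taken. The short Python sketch below is an illustrative example using only the standard library; the file name, URL, and prompt are hypothetical placeholders, and a record like this supplements, rather than replaces, proper forensic preservation.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def preservation_record(screenshot_path, platform_url, prompt_text):
        # Build a simple evidence record for a captured AI output. The
        # SHA-256 hash supports a later showing that the file has not been
        # altered since capture; the timestamp, URL, and prompt document
        # how the output was obtained.
        data = Path(screenshot_path).read_bytes()
        return {
            "file": str(screenshot_path),
            "sha256": hashlib.sha256(data).hexdigest(),
            "captured_utc": datetime.now(timezone.utc).isoformat(),
            "platform_url": platform_url,
            "prompt": prompt_text,
        }

    # Hypothetical usage -- all values are placeholders.
    record = preservation_record(
        "chatgpt_output.png",
        "https://chat.openai.com/",
        "What can you tell me about Jane Citizen?",
    )
    print(json.dumps(record, indent=2))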

Proving falsity. While the false content of an AI hallucination is usually straightforward to disprove with documentary evidence (court records, employment records, public sources), proving how the false information originated in the AI’s training data is more complex.

Fault. Establishing the required fault element against an AI company is legally and technically complex. Evidence about the company’s safeguards, its knowledge of hallucination risks, and its response to reports of false content may all be relevant.

Real-World Scenarios On AI Defamation

Scenario 1: A Professional Falsely Associated with Wrongdoing

A lawyer searches their own name using an AI assistant and discovers an output stating that they were struck off the roll for professional misconduct. This is entirely false. The output has been shared by a third party on social media. The lawyer contacts the AI company, which corrects the output. A concerns notice is also sent to the person who shared the content on social media. 

Scenario 2: A Business Owner and a Fabricated Criminal History

A small business owner discovers that an AI chatbot embedded in a third-party website has been generating outputs falsely stating that the owner was convicted of fraud. The outputs appear in response to searches by potential customers. The business owner preserves the outputs with full screenshots and timestamps, makes a formal complaint to the website operator, and seeks legal advice about claims against both the website operator and the AI company that supplied the underlying model.

Scenario 3: A User Prompts the AI Deliberately

A former employee uses an AI tool to generate a false account of their former employer’s business practices and publishes the output on review platforms. The employer identifies the source. In this scenario, the user who intentionally prompted and published the false content may face direct defamation liability, and a claim against the hosting platform may also be available.

Regulatory Developments

AI content liability is attracting significant regulatory attention in Australia and internationally.

The EU AI Act, which came into force in 2024, classifies certain AI systems as high-risk and requires auditing and accountability measures. Whether and how its provisions might interact with defamation claims is a matter of ongoing legal development rather than settled application.

In Australia, the Department of Industry, Science and Resources has been developing a framework for responsible AI, and the ACCC’s Digital Platforms work has examined platform accountability in ways that are broadly relevant to AI-generated content, though neither directly addresses AI defamation liability at present.

The eSafety Commissioner has existing powers in relation to harmful online content, though these do not currently extend specifically to AI defamation. Calls for reform are increasing.

These developments suggest that clearer statutory frameworks for AI-generated defamation are likely to emerge, but the current law already provides avenues for action in appropriate cases.

Frequently Asked Questions About AI Defamation

Can I sue OpenAI or Google directly for defamatory AI output? 

Possibly, but the law in this area is not settled. Success depends on whether you can establish that the company knew or should have known the system generates false information about real people, that safeguards were inadequate, and that the company’s role was sufficient to constitute publication. These are complex questions that require careful legal analysis. The Hood case shows that a formal legal demand can produce results even before proceedings are filed.

How do I prove an AI statement is false? 

Present documentary evidence that directly contradicts the false statement: court records, employment records, professional registrations, or other authoritative public sources. The falsity of an AI hallucination is usually the easier part of the legal analysis. Establishing fault and publication is typically more complex.

Can the person who prompted the AI be sued? 

Yes, in appropriate cases. Where a user intentionally prompted an AI to generate damaging false content and then published or shared it, they may be directly liable for defamation.

Is there a faster way to get false AI content removed? 

Yes, often. Most AI companies have feedback and correction mechanisms. A formal legal request, and in serious cases a concerns notice under the Defamation Act 2005, can be effective. The Hood case is instructive: OpenAI corrected the false information following a concerns notice, without litigation proceeding.

What is the limitation period for a defamation claim?

Generally one year from the date of first publication, with a court discretion to extend to three years where it is just and reasonable to do so. AI outputs can be generated repeatedly in response to different prompts, which raises unresolved questions about when the limitation period begins in these cases; this is another reason why early legal advice is important.

What to Do Right Now

  1. Preserve all evidence immediately. Screenshot the AI output in full, including the prompt used, the platform URL, timestamps, and any identifying information about how the output was generated. Do not rely on being able to reproduce the same output later.
  2. Request removal. Contact the AI company directly through its feedback or content correction mechanism. Document your request and their response.
  3. Document your harm. Keep records of any professional, financial, or reputational consequences, lost opportunities, third parties who saw the content, and communications referencing the false claims.
  4. Seek legal advice promptly. AI defamation claims raise novel legal questions and time limits apply. Early advice ensures your options remain open.

How O’Brien Criminal & Civil Solicitors Can Help With Your AI Defamation Claim 

O’Brien Criminal & Civil Solicitors can advise you on AI defamation matters, including: preserving evidence of the AI output; assessing the liability pathway against the AI company, the user, or the hosting platform; sending a concerns notice and formal demand; and pursuing legal proceedings if negotiation does not resolve the matter.

This is a new and rapidly evolving area of law. Acting early, before content is updated or removed and before the limitation period runs, gives you the best range of options.

Call O’Brien Criminal & Civil Solicitors on 02 9261 4281 or enquire online for a confidential consultation.

