The Legal Battle Against AI-Generated Abuse: The Case of Hannah Grundy

The case of our client Hannah Grundy, a Sydney woman targeted by AI-generated explicit images, has highlighted the urgent need for stronger legal protections against technology-facilitated abuse. The BBC recently reported on her distressing experience, in which a former friend, Andrew “Andy” Hayler, used artificial intelligence to create deepfake images of her and share them online without her consent. The betrayal was devastating, as Hayler had been a trusted friend for over a decade.

Hannah’s case underscores the emotional and legal challenges victims face when trying to remove explicit deepfake content and seek justice. Despite initial setbacks with law enforcement, her relentless pursuit of accountability, alongside legal representation from our criminal lawyer Sidnie Sarang, led to the successful conviction of the offender. Hayler was sentenced to nine years in prison for his offences against 26 women over a two-year period.

The Legal Implications of AI-Generated Abuse

Hannah’s case has ignited broader discussions about AI misuse and the need for robust legal reform to protect individuals from image-based abuse. In response to the rising threat posed by AI-generated explicit content, Australian lawmakers have strengthened existing criminal laws to combat these offences.

The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 introduces new provisions targeting the distribution of non-consensual deepfake material. Under this law:

  • A person who transmits sexually explicit material depicting an adult without their consent using a carriage service (such as social media or messaging apps) could face up to six years in prison.
  • The law applies whether the content is altered using AI or remains unedited.
  • The legislation aims to provide stronger legal protections for victims and ensure that offenders face serious consequences.

The passage of this law represents a critical step forward in addressing technology-facilitated abuse and aligns Australia with growing global efforts to regulate AI misuse.

AI Regulation in the Legal Profession in the Aftermath of the Hannah Grundy Case

Beyond criminal law, the legal profession itself is adapting to the challenges AI poses. The NSW Supreme Court recently introduced an AI Practice Note, which places restrictions on the use of artificial intelligence in legal proceedings. It specifically limits AI-generated content in:

  • Affidavits,
  • Witness statements,
  • Expert reports.

Legal professionals must now ensure that clients, witnesses, and experts comply with these guidelines when preparing case materials. The goal of these restrictions is to protect the integrity of legal documents and prevent inaccuracies or AI-generated misrepresentations from influencing court decisions.


Global Efforts to Regulate AI

Hannah Grundy’s case is part of a wider international conversation about regulating AI. In 2024, the European Union adopted the AI Act, the first comprehensive legal framework aimed at controlling high-risk AI applications, including those used for deepfake generation. This legislation focuses on:

  • Transparency in AI-generated content
  • Accountability for companies and individuals using AI
  • Restrictions on AI tools that pose a risk to fundamental rights

Closer to home, NSW has introduced the AI Assessment Framework (NSW AIAF) to guide the government’s use of AI technologies. While the framework is still in its early stages, it reflects the growing recognition that AI must be monitored and regulated to prevent harm.

The Road Ahead Following the Hannah Grundy Case

Hannah’s experience is not an isolated incident; it is part of a larger societal issue that demands urgent legal reform. The rapid advancement of AI technology, coupled with its potential for misuse, calls for stronger regulations, better enforcement, and increased awareness to protect individuals from image-based abuse.

With new criminal laws, court guidelines, and AI governance frameworks emerging, Australia is moving in the right direction. However, more needs to be done to prevent AI-driven harm before it occurs. At O’Brien Criminal & Civil Solicitors, we remain committed to fighting for victims of online abuse and ensuring that perpetrators are held accountable.

If you or someone you know has been affected by AI-generated abuse, our team is here to help.

Criminal Lawyers for Digital Abuse

If you or someone you know needs criminal defence, get in touch with our criminal lawyers today. Call (02) 9261 4281 or email us to arrange a free, confidential consultation.

Not convinced we can help you? Read our other successful criminal case studies. 

Nicole Byrne
