Digital Abuse and the Challenges of AI-Generated Child Sexual Abuse Material
Artificial intelligence is shaping almost every aspect of the world around us, particularly how we create, share, and consume content online.
With the rise of generative tools, new media can now be produced in minutes, often from a single prompt. As these technologies continue to push boundaries, the potential to improve our lives seems greater than ever.
Yet in the shadow of AI’s immense promise lies an equally great potential for misuse, and with it a far more nefarious threat has emerged.
Increasingly, generative AI tools are being misused to create sexualised depictions of children. What once required specialist skills can now be done with widely available software or mobile apps. With the barrier to producing abusive content lowered, there has been a sharp increase in the circulation of AI-generated Child Sexual Abuse Material (CSAM) online. The Internet Watch Foundation reported that, within a single month, over 20,000 AI-generated abuse images were shared on a dark web forum.
These developments raise pressing legal questions for lawmakers, regulators and law enforcement: Can the law treat AI-generated material the same as traditional CSAM? Who should be held responsible? And how can legal frameworks balance child protection with legitimate uses of AI?
Understanding AI-Generated CSAM
AI-generated CSAM is not a single category of material. Broadly, it can be divided into four types:
- Wholly synthetic images: AI produces fictional children in sexualised scenarios. No real child is involved, but the images can appear highly realistic and circulate like traditional CSAM.
- Morphing or hybrid images: AI blends images of adults and children to create sexualised depictions, often designed to bypass detection software.
- Cartoon or animated material: Stylised depictions of minors generated by AI, ranging from crude sketches to highly detailed CGI images. While the subjects may not exist, these images normalise the sexualisation of children.
- Deepfakes of real children: A particularly concerning trend is the rise of “nudification” apps that use AI to digitally strip clothing from images. While these apps claim to be for fun or entertainment, they are, predictably, frequently misused to target minors. A 2020 investigation by Sensity AI exposed a deepfake ecosystem on Telegram in which AI-powered bots were used to create more than 100,000 non-consensual images, many depicting underage individuals.
Many of these altered images originate from innocent photos that minors post on social media. They are then transformed into explicit deepfake content and disseminated across dark web forums, encrypted chat groups, and social media platforms, making detection and removal increasingly difficult.
Understanding the different ways AI is used to generate CSAM is crucial because legal responses and the severity of penalties can vary depending on the nature of the material. Courts and legislators often consider whether a real child is depicted, whether the image is hyper-realistic, and whether it is likely to be consumed by offenders.
Harms Beyond the Image
AI-generated CSAM is illegal, traumatising and socially corrosive. Even when no real child is involved, very real, serious harm can occur, both individually and systemically.
For children depicted — whether through deepfakes, manipulated photos, or look-alike models — the experience can feel like a direct assault on their autonomy and dignity. The resulting images expose them to humiliation, bullying, reputational damage, and ongoing trauma. Offenders often weaponise fake material for blackmail or grooming, compounding psychological harm and making children feel powerless.
Beyond individual victims, synthetic CSAM fuels broader risks. Its production and circulation normalise the sexualisation of children, create demand that may escalate into contact offences, and complicate law enforcement’s ability to distinguish between real and fake imagery. Once uploaded, such material spreads rapidly, often beyond the reach of takedown efforts, prolonging victims’ distress.
Ultimately, AI-generated CSAM undermines both child safety and public trust. Many jurisdictions already recognise the reputational, psychological, and societal harm it produces; legal systems and technology providers must treat it with the same seriousness as recorded abuse.
Australia’s Legal Framework
Australia already has robust laws criminalising CSAM, broad enough to capture AI-generated material. At the Commonwealth level, the Criminal Code Act 1995 (Cth) prohibits producing, possessing, or distributing child abuse material. Notably, the definition extends to material that “appears to depict” a person under 18 engaged in sexual activity, bringing realistic AI-generated images within its scope.
At the state level, the Crimes Act 1900 (NSW) similarly criminalises the production, dissemination, or possession of child abuse material (s 91H), with a definition broad enough to encompass digital, animated, and computer-generated content.
However, gaps remain. Current laws focus primarily on end-users, that is, those who create, possess, or share images. Developers of AI tools capable of generating CSAM, and platforms that distribute these tools, are not explicitly addressed. This creates potential loopholes: someone could produce software designed to generate child abuse imagery without ever directly creating or sharing it, raising questions of liability under existing laws.
Recent Cases
A string of high-profile cases involving AI-generated CSAM has pulled the issue sharply into public focus across Australian jurisdictions.
In March 2024, a former Queensland police officer, Lewer, was arrested for allegedly using a carriage service to access child exploitation material, including AI-generated images and text-based abuse. He faces up to 15 years in prison, with sentencing scheduled for December 2025.
In July this year, a man in NSW pleaded guilty to charges of accessing and possessing child abuse material, including over 22,000 files depicting the sexual abuse of pre-pubescent children, some of it AI-generated. He faces up to 15 years in prison, with sentencing set for November 2025.
Meanwhile, in Victoria last year, a Melbourne man was sentenced to 13 months’ imprisonment for online child abuse offences, including using an AI program to produce nearly 800 child abuse images.
Together, these cases emphasise the need for reform so that legal frameworks and enforcement mechanisms can adequately address the unique challenges posed by AI-generated CSAM.
Efforts at Reform
Recognising the gaps exposed by these cases, policymakers have begun proposing reforms. In 2025, Independent MP Kate Chaney introduced a bill aimed at criminalising the creation and distribution of AI tools designed to produce CSAM. The bill also sought higher penalties for possessing AI-generated deepfakes that use real children’s likenesses. While it has not yet passed, the initiative signals a growing awareness that current laws may not fully address AI-enabled abuse.
The eSafety Commissioner has also advocated for stronger obligations on platforms to detect and remove synthetic CSAM, emphasising the need for proactive measures in addition to criminal penalties. Together, these efforts suggest that Australia is moving toward a multi-layered approach that combines enforcement, platform regulation, and legislative clarity.
The International Context
The reforms currently proposed in Australia form part of a global conversation about artificial intelligence and its impact on children.
Leading the charge against the rising threat of AI-generated CSAM is the UK. Under new legislation, it will become the first country to criminalise the possession, creation and distribution of AI tools designed to generate CSAM. The legislation will also outlaw AI-generated paedophile manuals, with offenders facing up to three years in prison.
Similarly, in Europe, the EU’s proposed Child Sexual Abuse Regulation would require platforms to detect and report both real and synthetic CSAM. Europol has also prioritised AI-enabled CSAM as a cross-border threat – indicating a commitment to tackle the issue head-on.
Meanwhile, the United States has been less proactive, lacking a comprehensive legislative scheme. Though some states, such as California and New York, criminalise deepfakes of children, outdated legislation in other states fails to address newer forms of CSAM. Civil remedies are also seldom adequate, leaving significant gaps in accountability.
Legal and Ethical Challenges
While there appears to be a universal consensus that AI-generated CSAM constitutes abuse and should be treated as such, several legal and ethical challenges still complicate the process of lawmaking and enforcement.
- Evidentiary challenges – Prosecutors must demonstrate that an image is AI-generated rather than depicting a real child, a task that requires advanced forensic tools and expertise.
- Proportionality – Should cartoon or stylised AI images attract the same penalties as deepfakes of real children? Courts have generally leaned toward a broad interpretation, but debate continues.
- Developer and platform liability – Should AI developers face criminal responsibility if their tools are used for CSAM? How far should platform obligations extend?
- Privacy and surveillance – Detecting synthetic CSAM may involve scanning personal devices or encrypted communications, raising human rights concerns.
Addressing these issues requires a careful balance: protecting children while avoiding overreach that stifles innovation or erodes civil liberties.
The Way Forward
Reports from organisations such as WeProtect and the Internet Watch Foundation advocate for a principled approach to AI-generated CSAM that combines:
- Modernised statutory definitions that explicitly include AI-generated material.
- Targeted liability for developers and distributors where intent or recklessness is present.
- Platform responsibilities to detect and remove content promptly, with transparency obligations.
- Properly resourced law enforcement with access to digital forensics and cross-border cooperation.
- Public awareness campaigns, particularly in schools, about the risks of AI-generated sexualised images.
Adopting these measures is crucial so that the law evolves at the pace of technology while protecting the rights and safety of children.
Overall, AI-generated child sexual abuse material represents a disturbing evolution of an existing problem. While some images may involve no real child at the point of creation, the harms are very real – reputational damage, exploitation, and the continued sexualisation of children.
Although Australia’s laws are broad enough to capture much of this conduct, clear gaps remain, and they will only grow as AI continues to advance. Legal frameworks must therefore evolve with the technology, while balancing enforcement against privacy, innovation, and proportionality.