Criminalising Deepfakes: Lessons from the UK for Australian Law Reform
Elon Musk found himself entangled in another controversy last week, after his social media platform X was the subject of a deepfake nudes scandal.
The controversy centred on the platform’s AI Grok tool, which, over a two-week period before it was modified, had enabled paid users to create sexually explicit images using the faces of real people. Public outcry was especially fervent given the ease with which users could create sexualised images of children and women, and share them via Grok and on X.
Now, in the wake of formal investigations into X, the UK government has moved decisively to criminalise the creation of non-consensual images using AI, bringing forward provisions of the Data (Use and Access) Act that were previously expected to come into force more gradually. From this week, creating or requesting the creation of AI-generated intimate images without consent is a criminal offence, alongside existing offences relating to sharing or threatening to share such material.
As a means to regulate AI at its source, the government has also moved to criminalise nudification apps through the Crime and Policing Bill, making it illegal for companies to supply tools designed to generate non-consensual intimate images.
Together, the reforms represent a significant shift in how the law conceptualises harm, consent, and responsibility in the age of generative technology, offering a model that Australian lawmakers and practitioners are increasingly being asked to consider.
Until recently, the UK’s legal framework focused largely on distribution. Laws introduced to combat so-called “revenge porn” made it an offence to share or threaten to share intimate images without consent. Those provisions were later extended to cover digitally altered images, including deepfakes. But the actual generation of artificial sexual images still fell outside the scope of criminal liability, meaning individuals could legally create sexualised images so long as they did not distribute them.
For deepfake victims, this gap is profound. The knowledge that a realistic sexual image of oneself exists, created without consent, is deeply violating, whether or not it is ever shared. In this sense, the reforms reframe deepfake abuse as a matter of sexual autonomy and dignity, not merely reputational harm.
Under the new framework, a person commits an offence if they intentionally generate an AI-created sexual image of another identifiable person without that person’s consent. The law also captures those who commission, request, or pay for such images to be created. This is particularly pertinent, as it recognises the realities of digital marketplaces, where users can outsource the generation of deepfake content to third parties or specialised services. Consequently, offenders can no longer evade criminal liability by claiming they did not personally create the content but merely obtained it from someone else.
The reforms also ramp up punitive measures. Depending on the circumstances, offenders may face custodial sentences, substantial fines, and even placement on the sex offenders’ register. This signals clear legislative intent that deepfake abuse is no longer a minor offence, but rather a tangible form of sexual abuse.
It’s not just the substantive framework of the reforms that is interesting, but also the conceptual shift they are predicated on. Traditionally, criminal law has been reluctant to intervene in conduct that does not produce immediate, tangible harm. The creation of an image stored privately, without dissemination, would ordinarily fall outside the law’s purview.
The UK’s approach challenges that assumption. It accepts that the non-consensual creation of sexual images is harmful in itself, regardless of whether they are shared. In doing so, it aligns digital abuse with existing sex offences, where the violation lies in the absence of consent, not merely in subsequent distribution or publication.
Reframing the law in this way has broader implications. If the law recognises that digital abuse violates autonomy in the same way as physical abuse, it will force the courts to consider novel questions that are becoming increasingly complex in a digitally mediated context.
It is also important to note that the reforms do not focus solely on individual offenders. They form part of a broader regulatory environment, namely the UK’s Online Safety Act, that places heightened duties on platforms and technology providers.
Rather than relying purely on notice-and-takedown systems, regulators are now pursuing a policy of proactive risk mitigation. In practice, this means expecting companies that host or develop generative AI tools to implement safeguards such as technical restrictions, detection systems, clear reporting mechanisms and internal governance frameworks.
This represents a significant shift away from the traditional model of intermediary liability, in which platforms were largely shielded from liability provided they responded appropriately upon being notified of unlawful content. In this sense, the reforms impose a proactive duty on companies and platforms to prevent foreseeable harm.
For technology companies, this makes it far more difficult to avoid liability when harm occurs. For the lawyers advising them, it raises novel questions about what constitutes “reasonable” preventative measures, how risk should be assessed, and how liability might attach if safeguards fail.
Outside the UK, the reforms offer a case study in how the law might evolve in other jurisdictions, including Australia. They raise questions that lawmakers and courts will likely confront in the coming years around the legality of sexualised deepfake imagery and the duties owed by platforms and technology companies.
Regardless of whether the UK approach proves effective, or whether a similar approach is adopted in Australia, there is clearly a growing recognition that abuse occurring in virtual spaces, and facilitated by technology, is as damaging and violative as physical sexual abuse.
