
AI Deepfake Laws Explained: US and Australian Legal Responses

What Are Deepfakes and Why Are They a Growing Threat?

In recent years, the buzzword "fake news" spread rapidly across the internet, social media and news outlets. The term refers to false information presented in the style of conventional news, intended to mislead the public and influence opinion.

In response, the public was encouraged to approach the news critically, fact-checking stories or verifying them against reliable sources, or else risk disengaging from the news altogether. The recent and rapid emergence of AI-generated deepfakes, however, has created an unprecedented new obstacle, further straining public trust.

Deepfakes are a largely unexplored form of false information: highly realistic photos, videos and audio that are artificially generated to be almost indistinguishable from reality.

The ease with which ordinary individuals can now produce eerily convincing images has created widespread concern and distrust. Past viral incidents have included fabricated images of Donald Trump being arrested and sexually explicit images of Taylor Swift.

Of further concern, women are disproportionately the victims of sexually explicit deepfake images.

The potential for significant mental, emotional and physical harm evidently calls for legal intervention.

Deepfake Legislation in the United States: The Take It Down Act

How the Take It Down Act Seeks to Protect Victims

On 19 May 2025, Donald Trump signed the Take It Down Act, which aims to end the exploitation of deepfakes on the internet. Prior to the Act's enactment, each state had some form of protective law covering non-consensual intimate imagery.

However, different approaches were taken to address deepfakes specifically.

For example, some states considered their existing laws sufficient and made no changes to address deepfakes. Over 20 other states either amended their existing legislation or enacted new laws to explicitly cover deepfakes.

As a result, penalties and criminal prosecution were inconsistent across the US. Civil actions were also expensive, time-consuming, and often led to victims reliving their trauma. Hence, uniformly applied legislation directly targeting deepfakes was needed, resulting in the Take It Down Act.

Key Provisions of the Take It Down Act

The Take It Down Act protects and empowers deepfake victims in three main ways:

  1. Criminalisation of Deepfake Distribution: It is a criminal offence to knowingly publish, or even threaten to publish, digital forgeries of an “intimate visual depiction” involving adults or minors.
  2. Exceptions for Legitimate Use: Disclosure is allowed if made reasonably and in good faith, such as for law enforcement, medical, or educational purposes.
  3. Takedown Mechanism: Victims can request that online platforms remove non-consensual deepfake content, and platforms must comply within 48 hours, including removing known copies.

Free Speech Concerns and Criticism

However, the Act has drawn criticism from First Amendment lawyers, including those from the Cyber Civil Rights Initiative, Electronic Frontier Foundation, and Center for Democracy and Technology. Critics argue that the takedown provisions are too broad, potentially harming free speech and opening the door to misuse.

Australia’s Legal Response to Deepfake Technology

Federal Laws: New Criminal Offences for Deepfake Abuse

In August 2024, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 was passed to address online harm from sexually explicit deepfakes.

  • Section 474.17A: Criminalises non-consensual transmission of deepfake materials of adults, using any carriage service (social media, internet, etc.), with a penalty of up to 6 years’ imprisonment.
  • Section 474.17AA: Targets creators of deepfake content, imposing an aggravated offence punishable by up to 7 years’ imprisonment.

NSW Legislation: Gaps and Opportunities for Reform

NSW currently addresses intimate image abuse under Crimes Act 1900 Division 15C, which includes:

  • s 91P–91R: Offences for recording, distributing, or threatening to share intimate images without consent.
  • s 91H(2): Covers child abuse material but does not explicitly mention AI or deepfakes.

However, the absence of any reference to internet or AI technologies in key definitions suggests that NSW law may be outdated. Limited case law also makes it unclear whether deepfake child abuse material is adequately covered.

Victoria and South Australia: Leading the Way with Deepfake-Specific Laws

Victoria’s Justice Legislation Amendment (Sexual Offences and Other Matters) Act 2022 (Vic) explicitly expanded the definition of “intimate image” to include digitally created, manipulated or altered images resembling the victim. This forward-thinking move does not rely on carriage services and directly targets visual deepfakes, although audio/text deepfakes are still excluded.

South Australia followed suit in October 2024 with the Summary Offences (Artificially Generated Content) Amendment Bill 2024, criminalising the creation or distribution of humiliating, degrading, or invasive deepfake content.

Will NSW Update Its Deepfake Laws?

As deepfake technologies continue to evolve rapidly, NSW may need to modernise its legislation to remain effective. Aligning with federal standards and other state approaches, like those in Victoria and South Australia, could ensure better protection for victims of digital abuse.


If you or someone you know has been affected by deepfake content, call us on +612 9283 5599 or complete the free and confidential call-back form below.
