The Legal Test Ahead for Australia’s Under-16 Social Media Ban
Australia has taken a bold step into uncharted legal territory. From 10 December 2025, social media companies will be required by law to block users under 16 from creating or maintaining accounts. The rationale is clear – to shield children from the avalanche of online harms that have come to define the digital age. Amongst these harms is online child sexual exploitation – a particularly pressing concern given the unprecedented rise of end-to-end encryption, artificial intelligence and dark-web anonymity. How well the law will protect against it remains to be seen; critics, however, warn that the policy’s political promise conceals a far darker and more complex reality.
The Shifting Landscape of Online Abuse
The Criminal Code Act 1995 (Cth) makes it an offence to possess, produce, or distribute child sexual abuse material (CSAM), punishable by up to 15 years’ imprisonment (s 474.24). Yet despite these severe penalties, the volume of material in circulation continues to rise.
Law-enforcement data reveals that grooming now begins in plain sight — within messaging features of Instagram, Snapchat, gaming apps, and livestreaming sites. Predators use algorithms designed for engagement to locate minors, establish contact, and lure them into private chats or image exchanges. In one AFP case, an offender used more than 20 different social-media accounts to contact hundreds of children under 13 within weeks.
The Numbers
In this context, Australia’s eSafety Commissioner, Julie Inman Grant, has long described online child exploitation as a “shadow pandemic”, and for good reason.
The statistics around online child sexual exploitation are sobering and demonstrate why governments around the world are scrambling to take decisive action.
In 2024–25, the Australian Centre to Counter Child Exploitation (ACCCE) received 82,764 reports of online child sexual exploitation, up 41 per cent from the year before. That’s more than 220 reports every day. In fact, since the ACCCE’s creation in 2018, total reports have more than doubled.
According to data compiled by Bravehearts, around 7.6 per cent of Australian children say a sexual image of them has been shared without consent. A further 17.7 per cent report being sexually solicited online by an adult.
Globally, researchers at UNSW estimate that over 300 million children experience online sexual abuse or exploitation each year — roughly one in eight children worldwide.
What the New Law Actually Does
In November 2024, Parliament passed the Online Safety Amendment (Social Media Minimum Age) Act 2024, amending the Online Safety Act 2021.
The law requires platforms to take “reasonable steps” to stop anyone under 16 from using their services. Failure to comply can attract civil penalties of up to A$49.5 million or 10 per cent of annual turnover, whichever is greater.
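To put that penalty structure in concrete terms, here is a minimal sketch in Python (purely illustrative, and based only on the figures as described in this article) of how the “whichever is greater” cap would be calculated for a hypothetical provider.

```python
# Illustrative only: maximum civil penalty exposure under the cap described
# above (a fixed A$49.5 million or 10% of annual turnover, whichever is
# greater). The turnover figure below is hypothetical.

FIXED_CAP_AUD = 49_500_000  # A$49.5 million
TURNOVER_SHARE = 0.10       # 10 per cent of annual turnover


def max_penalty(annual_turnover_aud: float) -> float:
    """Return the larger of the fixed cap and the turnover-based cap."""
    return max(FIXED_CAP_AUD, TURNOVER_SHARE * annual_turnover_aud)


if __name__ == "__main__":
    # A hypothetical platform turning over A$2 billion a year.
    print(f"A${max_penalty(2_000_000_000):,.0f}")  # prints A$200,000,000
```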
Key points:
- Commencement: 10 December 2025.
- Responsibility: falls on social-media providers, not parents or children.
- Scope: applies to any service allowing user-generated posts, public interaction, and networking features.
- Enforcement: overseen by the eSafety Commissioner, who will issue compliance guidelines and conduct audits.
- Verification: the Government is testing “age-assurance” technology that can estimate or verify a user’s age through metadata, facial analysis, or third-party verification.
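Because the Act does not prescribe any particular mechanism, the sketch below (in Python) is only one way a layered age-assurance check of the kind listed above might be structured. The function names, thresholds, and the fallback from a facial-age estimate to third-party verification are illustrative assumptions, not requirements of the law or any vendor’s product.

```python
# Hypothetical layered age-assurance decision. All thresholds and data
# sources are placeholders; the Act only requires "reasonable steps".
from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 16
CONFIDENCE_THRESHOLD = 0.90   # assumed operating point for estimation
SAFETY_MARGIN_YEARS = 2       # buffer for known estimation error


@dataclass
class AgeSignal:
    estimated_age: Optional[float]   # e.g. from facial-age estimation
    estimate_confidence: float       # 0.0 to 1.0
    verified_age: Optional[int]      # e.g. from a third-party ID check


def allow_account(signal: AgeSignal) -> bool:
    """Decide whether an account may be created under the age floor."""
    # 1. A verified age from an accredited third party is decisive.
    if signal.verified_age is not None:
        return signal.verified_age >= MINIMUM_AGE

    # 2. Otherwise accept a high-confidence estimate only with a margin
    #    that allows for error across demographic groups.
    if (signal.estimated_age is not None
            and signal.estimate_confidence >= CONFIDENCE_THRESHOLD):
        return signal.estimated_age >= MINIMUM_AGE + SAFETY_MARGIN_YEARS

    # 3. No reliable signal: refuse, and escalate to stronger verification.
    return False
```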
Put simply, social-media giants like Meta, TikTok, and X will soon have to prove that they can keep out underage users without breaching privacy laws or alienating legitimate ones.
The Argument For
Supporters argue that reducing under-16s’ exposure to social media cuts off key entry points for predators.
- Grooming and Direct Contact – Many exploitative relationships start in DMs or friend requests. Restricting under-16 access theoretically shrinks the pool of potential targets.
- Sexualised Content Exposure – Algorithms often feed young users inappropriate or exploitative material. Removing children from those ecosystems might reduce accidental or coercive exposure.
- Self-generated Exploitation – Some cases involve minors pressured or tricked into producing sexual content themselves. Restricting access could lessen that risk window.
The Age-Verification Dilemma
As stated, the new Act requires social-media companies to take “reasonable steps” to keep users under 16 off their services. Yet the law deliberately avoids defining what those steps are, reflecting the deeper tension between protecting children and protecting privacy.
Age-verification systems typically rely on identity documents, credit data, or biometric analysis, all methods that raise serious privacy concerns under the Privacy Act 1988 (Cth). Mandatory ID uploads would require vast databases of minors’ personal information, effectively creating new security risks. Even facial-age estimation technology, touted as a privacy-friendly alternative, produces error rates that vary markedly across ethnic and gender groups.
Tech companies have warned that the law may be “extremely difficult to enforce.” Google and Meta note that age verification at scale would require collecting data from hundreds of millions of accounts worldwide.
Meanwhile, the predators the law seeks to stop are already migrating to encrypted messaging services, private servers, and the dark web, far beyond the reach of age filters.
The Reality of CSAM Enforcement
Even with stricter laws, the practical challenge is detection. The eSafety Commissioner’s Takedown and Removal Notice Scheme operates reactively. While investigators can order content removed once it is found, they cannot prevent its creation or rapid replication.
The AFP’s Joint Anti-Child Exploitation Teams and international partners such as Europol and Interpol now use AI tools to identify known victims and hash-match previously detected images. Yet CSAM itself is evolving, with offenders increasingly generating synthetic abuse imagery using generative-AI models trained on real children’s photos. This new category of “virtual CSAM” falls into legal grey zones, forcing courts to decide whether digitally created content depicting non-existent children still constitutes an offence.
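Hash-matching of the kind mentioned above is conceptually straightforward: known abuse images are reduced to compact fingerprints, and new uploads are compared against that list. The sketch below shows only the matching step, in Python, using an ordinary cryptographic hash; production systems such as PhotoDNA rely on perceptual hashes that survive resizing and re-compression, and those algorithms are not public, so treat this as an illustration of the principle rather than of any real deployment.

```python
# Illustration of the matching step behind hash-based detection of known
# material. A real system would use perceptual hashes (e.g. PhotoDNA);
# the exact SHA-256 hash used here only catches byte-identical files.
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a fingerprint of an uploaded file (SHA-256 for illustration)."""
    return hashlib.sha256(data).hexdigest()


def is_known_material(data: bytes, known_hashes: set[str]) -> bool:
    """Check an upload against a database of fingerprints of known material."""
    return fingerprint(data) in known_hashes


if __name__ == "__main__":
    # Hypothetical hash list of previously identified files.
    known_hashes = {fingerprint(b"previously identified file")}
    print(is_known_material(b"previously identified file", known_hashes))  # True
    print(is_known_material(b"a brand-new upload", known_hashes))          # False
```

The limitation is built in: matching of this kind can only recognise material that has already been identified, which is why the shift towards novel, AI-generated imagery described above is so difficult to police.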
Australia’s existing laws, written well before the age of deepfakes and generative AI, struggle to keep up. It doesn’t help that the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth) does not explicitly address AI-generated abuse material.
Symbolic Law for a Substantive Problem
The political appeal of the social-media age ban lies in its simplicity. It offers a visible response to parents’ fears at a time when online child exploitation feels uncontrollable. But as experts repeatedly warn, age limits do not equal safety.
In a 2025 submission to Parliament, the Australian Human Rights Commission cautioned that banning under-16s may create unintended harm by excluding vulnerable teens who rely on online communities for mental-health or identity support. Others worry that cutting off legitimate access will drive youth engagement underground, onto less regulated platforms where predators roam unchecked.
Beyond this, it is essential to recognise that the issue of online child sexual exploitation is structural. It is not a single offence, but a broad ecosystem. It depends on technological infrastructure, financial incentives, and cross-border networks that cannot be dismantled by domestic legislation alone.
Creating Real Accountability
To make the social-media age law meaningful, Australia must pair it with genuine enforcement capacity. That means:
- Mandatory CSAM-detection standards for all major platforms, using AI hash-matching tools such as Microsoft’s PhotoDNA.
- Expanded cooperation treaties allowing the AFP to compel data from offshore companies.
- Increased funding for the eSafety Commissioner’s specialist CSAM unit to handle the growing case backlog.
- Legislative clarity on synthetic and AI-generated abuse material.
- And above all, survivor-centred frameworks ensuring that those whose images are circulated online receive timely, trauma-informed support.
Internationally, the EU’s proposed Regulation to Prevent and Combat Child Sexual Abuse would go much further, requiring proactive scanning even of encrypted content. Australia has been hesitant to follow, fearing a privacy backlash. Yet without similar tools, our enforcement agencies remain reactive rather than preventative.
The Legal Implications for Platforms
The Online Safety Amendment Act may also foreshadow a new era of civil accountability for social-media companies. By codifying explicit obligations to restrict underage use and respond to harmful material, the legislation could strengthen arguments that platforms owe users a duty of care.
In future litigation, survivors of online exploitation may allege negligence or breach of statutory duty if a platform fails to act on known risks. Comparable claims are already emerging in the United States and the United Kingdom, where parents of victims have sued Meta and TikTok for design features that allegedly facilitated grooming.
Australia’s amendment, while not directly creating private rights of action, lays the groundwork for such claims. If companies are expected to prevent underage access, failing to do so could one day translate into liability.
Looking Ahead
Whether this policy succeeds or stumbles, it will set a precedent that will be watched worldwide.
If Australia’s framework reduces online exploitation without over-surveillance, it could become a model for other democracies. If it fails, it will serve as a cautionary tale about overregulation and technological optimism.