Australia Confronts 41% Surge in Child Exploitation Reports
For years, the global community has debated how far the law should go in holding technology giants accountable for the risks their platforms pose to children. And as the means of exploitation grow ever more sophisticated, so too does that debate – and everything around it.
In the past year alone, the Australian Centre to Counter Child Exploitation (ACCCE) recorded a 41 per cent surge in reports of online child sexual exploitation (OCSE).
With grooming scandals on Roblox, the explosion of deepfake pornography, rising youth self-harm linked to algorithmic design, and a looming national ban on social media for under-16s, governments around the world are coming under increasing pressure to act.
Within this context, a pertinent question arises – do digital platforms owe their users, especially children, a duty of care?
Increasingly, the answer is yes. Around the world, momentum for a digital duty of care is steadily building. Yet Australia has been tentative in its approach. Although the government announced its decision to develop a Digital Duty of Care regulatory model late last year, it has yet to come into effect. Now Australia finds itself at a crucial juncture – take the action the scale of the problem demands, or risk falling behind global standards and hanging its children out to dry.
A Digital Explosion
According to new figures, the ACCCE received 82,764 OCSE reports across the 2024/25 financial year, primarily from the United States National Center for Missing and Exploited Children, as well as from members of the public and government agencies such as the eSafety Commissioner.
Compared to the 58,503 reports in 2023/24, 40,232 in 2022/23, and 36,600 in 2021/22, the number is staggering.
This surge is not anomalous, however – it is the predictable result of technological shifts, fragmented regulation, and the widening gap between legislative ambition and practical enforcement. More than that, it is indicative of the sheer volume and sophistication of child-exploitation material circulating online.
In a world of generative AI, cloud-based storage, and end-to-end encryption, it has never been easier for predators to prey on children.
Unlike abuse of the past, today’s exploitation is:
- Instant, unfolding in real time through livestreamed abuse.
- Borderless, involving offenders, servers, and victims across multiple jurisdictions.
- Industrialised, with grooming, extortion and image-trading occurring at scale.
- Technologically advanced, fuelled by sophisticated deepfake tools that can create sexualised images of children.
In this regard, the 41 per cent surge is not merely evidence of more reports – it is evidence that a more sophisticated, accessible, and profitable ecosystem of abuse is taking shape. Consequently, it cannot be treated as a temporary spike. It is a structural issue that necessitates structural reform.
The Need for a Digital Duty of Care
In this context, now seems an opportune moment to revisit the rationale for a digital duty of care.
Within the digital environment, it is the platforms that design the algorithms that push harmful content, the pathways that enable predators to contact children, the addictive loops keeping teens scrolling for hours on end, and the opaque moderation systems that routinely fail abuse survivors.
Yet although tech companies have real control over these systemic risks, they owe no explicit duty of care to prevent foreseeable harm to their users – unlike manufacturers, financial service providers, or even shopping centres, which do.
Under common law negligence principles, courts have been hesitant to impose such a duty, viewing platforms as facilitators rather than custodians of public safety. Without legislative intervention, victims harmed by online exploitation, self-harm content, bullying, grooming or deepfake abuse are left with few avenues for redress.
This is precisely where a digital duty of care would intervene.
What a Duty of Care Would Look Like
A digital duty of care would place a statutory obligation on platforms to take reasonable and proportionate steps to prevent foreseeable harm within their services. It does not mandate censorship or convert tech companies into moral arbiters.
Instead, it obliges them to:
- Assess risks.
- Design systems safely.
- Implement effective age-assurance.
- Act quickly when harm emerges.
- Document what they are doing to protect users – particularly children.
The duty is preventative, not punitive. It regulates platform behaviour, not speech. And just as workplaces must adopt systems reasonably calculated to prevent harm, platforms should be required to operate digital environments that are not foreseeably dangerous.
If Australia were to legislate a digital duty of care, it would not be acting alone. In fact, the global trend is now decisively moving towards codifying duties of care for digital platforms.
In the EU, a duty of care has been legislated through the Digital Services Act, which places varying obligations on different categories of services, platforms and providers to tackle illegal content and disinformation and to ensure transparency in online advertising.
For intermediary services, the legislation imposes obligations to comply with orders to remove illegal content and to publish annual reports on their content removal and moderation activities. For very large online platforms and search engines, it creates obligations to install internal complaint-handling systems for content removal decisions, comply with enhanced transparency requirements, conduct annual risk assessments, establish an independent compliance function, and provide additional information and user choice in relation to online advertising and the recommender systems used on their platforms.
The UK Online Safety Act 2023 creates a similar duty of care for online platforms. It requires regulated services to conduct risk assessments at defined intervals, assess whether the service is likely to be accessed by children and how likely children are to be harmed by content on it, and take action against illegal or harmful content from their users.
Where to Next
Although the Albanese government has previously expressed its intention to develop and legislate a digital duty of care, little progress has been made on the draft legislation needed to bring that plan to fruition.
If eventually legislated, the duty would place legal obligations on digital platforms to take reasonable steps to protect Australian users from foreseeable harm, with the risk of heavy penalties for systemic failures, in an effort to “deliver a more systemic and preventative approach to making online services safer and healthier”.
The duty of care was first recommended by an independent review of the Online Safety Act. It also follows several other reforms directly targeting online safety harms on digital platforms, including the social media ban for under-16s, the misinformation bill, and amendments to the Criminal Code Act 1995 through newly introduced deepfake legislation.
Regardless, a 41 per cent rise in online child-exploitation reports is not just a headline – it is a call to action. Technology has clearly outpaced existing child-protection frameworks, and without decisive, coordinated legal reform, including a digital duty of care, Australia will continue to see exponential increases in harm.
