Australia’s Social Media Ban: A Proportionate Response or Dangerous Overstep?
As the Labor government embarks on an ambitious quest to ban social media for users under the age of 16, key questions around its implementation and practical application remain unanswered.
While it is proposed that the ban is needed to protect the mental health and wellbeing of children and teens, the overwhelming public consensus seems to be one of cautious intrigue.
In the past week, the fervour around the ban has kicked back into gear after Anthony Albanese backflipped on plans to provide an exemption to YouTube, following a recommendation from eSafety Commissioner Julie Inman Grant.
With that in mind, it’s important to elucidate just what’s in store for young Australians, especially since the changes commence in the coming months.
Understanding the Changes
In response to growing concerns about the impact of social media on young Australians, the government passed the Online Safety Amendment (Social Media Minimum Age) Act 2024 (‘the Act’), amending the Online Safety Act 2021.
The first of its kind, the landmark legislation raises the minimum age for creating social media accounts from 13 to 16 in an attempt to protect children online.
It applies to services that allow users to interact, post, and share content, such as Instagram, Facebook, Snapchat, X, and TikTok, and requires them to take ‘reasonable steps’ to prevent under-16s from creating accounts, with penalties of up to $49.5 million for non-compliance.
A fact sheet published by the federal government in December last year said platforms would fall under the age restriction requirements if the platform:
- has a sole or significant purpose to enable online social interaction between two or more users
- allows users to link to, or interact with, some or all of the other users
- allows users to post content
The key question arising from the Act’s enactment is how platforms will verify users’ ages.
Unfortunately, no clarification has yet been provided by the government on the matter, despite the fact that the restrictions are expected to come into force by December 2025.
While social media platforms may request that users provide government ID to verify their ages, it is not a requirement for access. Instead, they need to offer “reasonable alternatives” for users to prove they are 16 or older.
And though trials of age-checking technology have been conducted, uncertainty over their effectiveness has cast doubt over the ban’s viability.
For example, face-scanning technology tested on school students this year could only guess their age within an 18-month range in 85 per cent of cases.
The Rationale
The changes come at a time when cultural phenomena such as Netflix’s Adolescence have infiltrated the public consciousness, elucidating the complex impact of social media on young people in 2025.
In Australia, 84% of children aged 8-12 have reportedly used at least one social media platform, with only 13% correctly identified by platforms as being under the minimum age requirement of 13.
Further, there is an increasing amount of evidence linking social media use by adolescents with mental illness.
In light of this, the government argues that the ban is a necessary step to protect youth mental health and limit exposure to online harms—such as cyberbullying, harmful or addictive content, disinformation, self-harm, and body image issues.
Speaking to the media, Prime Minister Anthony Albanese said:
“Our Government is making it clear – we stand on the side of families.
Social media has a social responsibility and there is no doubt that Australian kids are being negatively impacted by online platforms so I’m calling time on it.
Social media is doing social harm to our children, and I want Australian parents to know that we have their backs.”
The Legal Implications
Whilst the ban will undoubtedly fuel an endless media frenzy, it is more productive to analyse it through the lens of the law, especially as platforms such as YouTube consider legal action.
In the legal context, the ban has significant implications across a broad spectrum of domains, elucidating novel risks that necessitate considered legal analysis.
The eSafety Commissioner even acknowledged that the legislation is “one of the most complex and novel pieces” the government has handled, likening its implementation to “building the plane while flying it”.
Impact on Child Rights
Australia is a signatory to several treaties, including the Convention on the Rights of the Child and the International Covenant on Civil and Political Rights. Under these treaties, children are guaranteed certain rights such as freedom of expression, access to information, education, play and leisure, and protection from discrimination. While these rights are not absolute and can be limited, any restriction must serve the best interests of the child.
Though it is argued that the ban is a proportionate measure to protect children, human rights advocacy groups such as the Australian Human Rights Commission think otherwise. They warn that a blanket ban undermines children’s rights and is a net negative for wellbeing, deeming it overly restrictive and disproportionate to the risks posed by social media.
On this basis, there is a broad scope for the ban to be challenged under international law, where it could be argued that the legislation fails to strike the balance that Australia’s treaty obligations necessitate.
Freedom of Expression
While Australia does not have a formal bill of rights, the High Court has recognised an implied constitutional freedom of political communication.
A blanket ban on under-16s accessing major platforms may unintentionally limit young people’s ability to participate in political discourse, particularly during key periods like elections or referendums.
Considering that social media can be used as a valuable conduit for speech and advocacy, the ban may be susceptible to legal challenge.
However, considering the legislation’s dominant purpose is the protection of child safety, it is likely proportionate, and therefore immune from challenges to its constitutional validity unless it can be demonstrated to be manifestly excessive.
Discrimination
One of the most dangerous aspects of the ban is its potential to further disenfranchise vulnerable groups. Already, UNICEF has expressed concern about how it may affect vulnerable youth who rely on social media for support networks.
Under Australia’s Disability Discrimination Act 1992 and Racial Discrimination Act 1975, policies that have a disproportionate impact on vulnerable groups, even if not intentionally discriminatory, are prohibited.
On this basis, the validity of the legislation may be challenged if it is found to disproportionately harm at-risk groups such as LGBTQ+ teens in rural areas or young people experiencing domestic violence, who look to online communities for safe spaces.
Affected individuals may argue that the legislation indirectly discriminates against them by alienating them from important sources of information and community.
“Reasonable Steps”
The lack of clarity in the drafting of the legislation also leaves its provisions open to interpretation, complicating the process by which social media platforms may enforce the ban.
In particular, the requirement that “reasonable steps” are taken to ensure users are over 16 is shrouded in ambiguity. It is uncertain what constitutes “reasonable steps”, and further, what qualifies as being “reasonable” may vary across platforms.
For the ban to be effectively enforced, there is a need for further regulatory action to give the provisions a clearer and more specific characterisation.
Privacy Concerns
To comply with the law, platforms must verify the age of users. Under s 63DB of the Act, this requires them to provide “alternative age assurance methods” for account holders to confirm their age, rather than relying only on government-issued ID.
However, the large-scale data collection that the implementation of new age-assurance technologies would require raises significant privacy and data concerns. Pursuant to s 63F of the Act, any information collected for the purposes of age checks will need to be destroyed to avoid a breach of s 13 of the Privacy Act 1988.
Despite this, the widespread collection and storage of data before its destruction still raises significant privacy concerns. If data is leaked, misused or inadequately secured, platforms may incur liability under privacy laws.
Cross-Border Enforcement
The majority of social media platforms are headquartered in the U.S. or Europe, which means the laws will have to be enforced extraterritorially and may potentially conflict with foreign privacy laws such as the General Data Protection Regulation (GDPR).
In addition, the transnational nature of the internet makes it far easier to circumvent the ban, especially through the use of Virtual Private Network (VPN) technologies.
Already in the United Kingdom, the use of VPNs and other circumvention tools has spiked significantly since it introduced similar reforms through its Online Safety Act.
Proportionality
Ultimately, the overarching concern arising from the changes is whether the ban is a proportionate measure to effectively protect young users from online harm.
Although the need to better protect children and young people online is plain and clear, it is uncertain whether a blanket ban is the right response. There is an argument to be made that the Government is going beyond the best interests of the child by limiting their access to valuable online spaces and information.
This argument is strengthened by the fact that less restrictive measures could be introduced without imposing such a broad restriction on the rights of the child.
For instance, the Australian Human Rights Commission has suggested placing a legal duty of care on social media companies. A statutory duty of care would require social media platforms to take reasonable steps to make their products safe for children and young people, introducing a more proactive way of increasing accountability.
Beyond statutory reforms, there are calls to help children and young people to better navigate online spaces through education. This would necessitate an increased focus on digital literacy and online safety within the national curriculum so that young people are taught to think critically about what they see online and how they engage with social media. There is also a need to equip parents and teachers with the tools and resources that are required to provide appropriate guidance and support to children.
Looking Ahead
As Norway plans a similar ban for under-15s, Australia’s unique approach seems poised to reshape attitudes globally. Despite its novelty, the changes are underscored by inherent complexities that raise real questions about the rights of young Australians.
Have questions about the ban or how it may impact your children? Feel free to call us on +612 9283 5599 or complete the free and confidential call-back form below. We’re here to help.