A Tobacco Moment for Social Media? The Litigation Reshaping Liability
Social media companies occupy a peripheral position within the law. The litigation they face is ambitious, often novel, and ultimately constrained by an outdated legal architecture that treats them as passive intermediaries. For the most part, they have been shielded from liability, despite their demonstrated capacity for harm. That posture, however, is beginning to fracture.
Just last week, a New Mexico court ordered Meta to pay US$375 million (approximately AUD$544 million) after a jury found that its platforms had facilitated environments in which child sexual exploitation could proliferate, in wilful breach of the state’s consumer protection laws.
And now, almost contemporaneously, a California jury in K.G.M v Meta et al. has found Meta and Google liable for designing platforms said to foster compulsive use and psychological harm.
It is difficult to dismiss these judgments as coincidence. They sit within a broader wave of proceedings – brought by individuals, school districts, and state attorneys-general – that are beginning to test the invincibility of social media platforms. Taken together, they confront a proposition once thought to be implausible – that social media platforms may be liable not merely for what they host, but for how they are built.
The Judgment
One would be forgiven for dismissing the significance of the K.G.M judgment on first impression. The damages awarded are modest by the standards of contemporary litigation. But to focus purely on quantum would be to miss the point. What matters here is the substance of the claim itself, and what the verdict establishes.
The lead plaintiff, now 20 years old, claimed that from an early age she was exposed to platforms engineered around infinite scroll, autoplay, and algorithmic recommendation systems calibrated to maximise engagement. Over time, that architecture produced compulsive patterns of use, contributing – so it was argued – to anxiety, depression, and body dysmorphia.
Following testimony from Mark Zuckerberg, co-founder and CEO of Meta, and Adam Mosseri, head of Instagram, the jury handed down a verdict in favour of the plaintiff against Meta and Google after a seven-week trial, finding that both companies had been negligent in the design of their apps. Notably, TikTok and Snapchat had originally been named as defendants but settled before the trial began. Compensatory damages of US$3 million (approx. AUD$4.35 million) were awarded, plus a further US$3 million (approx. AUD$4.35 million) in punitive damages – a total of US$6 million (approx. AUD$8.7 million), apportioned so that Meta bears 70% (US$4.2 million / approx. AUD$6.1 million) and Google 30% (US$1.8 million / approx. AUD$2.6 million).
For decades, platform liability has been situated almost exclusively within the context of content moderation. Under that framework, liability arose only where content was unlawful, or where a platform failed to remove unlawful content. On this terrain, social media companies have enjoyed their greatest protection – a ‘safe harbour’ substantially reinforced in the United States by Section 230 of the Communications Decency Act of 1996, which casts platforms in the role of intermediary rather than publisher.
But in K.G.M, the jury accepted that the inherent design of these platforms – along with their defining features – could constitute a source of legally recognisable harm. This reframing of liability is subtle but highly consequential. It suggests that while Section 230 may protect what a platform publishes, it does not necessarily protect how that platform functions – at least at first instance, pending appeal.
Indeed, this shift has a familiar quality, particularly when framed against the ‘tobacco moment’ of decades ago. While the comparisons were once deployed quite loosely, they are beginning to land with greater precision. Like cigarette manufacturers before them, social media companies are now being confronted with claims that they understood the risks inherent in their products, continued to refine those products in ways that amplified those risks, and did so in pursuit of sustained user engagement. Internal research, surfacing in various proceedings, has only sharpened that narrative.
It is also important to note that K.G.M was selected as a bellwether from a much larger pool of claims – over 1,600 plaintiffs in the California coordinated proceeding alone, and more than 10,000 individual cases across the United States. Its reasoning will inevitably be tested, refined, and contested on appeal. In the meantime, it provides an important template – demonstrating, at least at first instance, that courts are willing to entertain a theory of liability grounded not in publication, but in design.
The Australian Perspective
From an Australian perspective, the immediate question following K.G.M is whether its underlying logic can be translated into a domestic context.
There are, of course, structural differences. Australia does not operate under a Section 230-style immunity, and platform liability has developed through a more fragmented interplay of defamation law, statutory regulation, and incremental judicial reasoning. But those differences may prove less of a barrier than might first appear.
In fact, Australian law may be unusually well-positioned to accommodate the shift. The idea that liability can arise from design is not entirely novel within Australian negligence law. The more difficult question – whether a digital platform can be characterised as a product or system capable of negligent design – is precisely the conceptual move that K.G.M begins to normalise.
More interesting still is the potential intersection with existing precedent. In Prince Alfred College v ADC [2016] HCA 37, the High Court articulated the circumstances in which an employer can be held vicariously liable for the criminal acts of an employee – focusing on whether the role assigned to the wrongdoer provided the occasion (rather than merely the opportunity) for harm, particularly where that role placed the employee in a position of authority or intimacy over a vulnerable person. And more recently, in AA v The Trustees of the Roman Catholic Church for the Diocese of Maitland-Newcastle [2026] HCA 2, the High Court went further still – overturning long-standing precedent to hold that a non-delegable duty of care may be breached by intentional criminal conduct on the part of a delegate, provided the institution had assumed responsibility for the safety of the person harmed and the risk was reasonably foreseeable.
Though the analogy is not perfect, it is surprisingly close. Social media platforms, particularly in their engagement with children and adolescents, exhibit many of the same features as traditional institutions – asymmetries of power, curated environments, and risks that are not external to the system, but produced by it. The AA decision, handed down only in February 2026, is particularly significant: it confirms that where an institution places a person in a position of authority over a vulnerable individual and assumes responsibility for their safety, the criminal or intentional nature of the resulting harm does not immunise the institution from liability. If those parallels are accepted in a digital context, the conceptual distance between institutional liability and platform liability narrows substantially.
Against this backdrop, an increasingly assertive regulatory environment has already taken shape. Beyond the Online Safety Act 2021 (Cth) and the expanding role of the eSafety Commissioner, the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth) – which came into force in December 2025 – now requires social media platforms to take reasonable steps to prevent Australians under 16 from holding accounts, with civil penalties of up to AUD$49.5 million for systemic non-compliance. Australia has already signalled a clear willingness to intervene in digital ecosystems. Cases such as K.G.M only serve to demonstrate that courts may become an equally important site of that intervention.
Looking Ahead
Granted, the legal reach of K.G.M is not absolute. It does not resolve the complex questions that will inevitably arise around causation, scope of duty, or the limits of platform responsibility. Nor does it displace the central role of legislatures in shaping digital regulation.
What it does do, however, is perhaps more important – it reframes the initial posture. The question is no longer confined to whether platforms should be responsible for the content they host; it is whether they should be responsible for the systems they design. If the trajectory of recent litigation is any indication, this is a question courts are becoming increasingly willing to confront.
