Grok, X and AI-Generated Sexual Abuse

Elon Musk is at the centre of yet another controversy, after reports that X is facilitating the rapid spread of AI-generated non-consensual sexual images, also known as deepfakes.

In recent weeks, users on X have taken to a new trend of prompting X’s Grok chatbot to convert images of people, primarily women but also children, into naked or near-naked depictions. These images are publicly visible on X and are being shared to harass, demean, or silence individuals.

According to reports, of 20,000 images generated by Grok between December 25 and January 1, around 2 per cent appeared to depict people aged 18 or younger, including 30 images of young or very young women or girls in bikinis or transparent clothing.

The misuse of Grok is symptomatic of the current online climate, in which generative AI is increasingly being exploited for nefarious purposes. What once required specialist skills can now be done with widely available software and mobile apps, and with the barrier to content production significantly lowered, there has been a sharp increase in the circulation of AI-generated Child Sexual Abuse Material (CSAM) online. The Internet Watch Foundation reported that, within a single month, over 20,000 AI-generated abuse images were shared on a dark web forum.

In this sense, the controversy around Grok brings pertinent questions sharply back into focus for lawmakers, regulators, and law enforcement: Can the law treat AI-generated material the same as traditional CSAM? Who should be held responsible? And how can legal frameworks balance child protection with legitimate uses of AI?

Before answering these questions, however, it is worth looking at the responses of regulators around the world, who have begun opening inquiries, demanding takedowns, and threatening legal action.

  • European Union: The European Commission has signalled potential enforcement action, stating that it is “very seriously looking into” the matter. A Commission spokesperson described the conduct as unlawful and wholly incompatible with European values, emphasising that the Digital Services Act (DSA) is applied rigorously and that all platforms operating in the EU are expected to comply with its obligations.
  • United Kingdom: UK regulators have moved quickly. Communications regulator Ofcom has demanded explanations from X regarding how Grok was able to generate non-consensual sexualised images, including images involving children, and whether the platform has failed to meet its statutory duty to protect users. Ofcom confirmed it has made urgent contact with both X and xAI and will conduct a swift assessment to determine whether a formal investigation is warranted. The Information Commissioner’s Office (ICO) has also contacted X and xAI to seek clarity on their compliance with UK data protection law and the safeguards in place to protect individuals’ rights.
  • France: French authorities have opened a criminal investigation into the dissemination of sexually explicit AI-generated deepfakes on X. The Paris prosecutor’s office confirmed the probe follows complaints lodged by French lawmakers concerning content produced by Grok.
  • Ireland: The media regulator, Coimisiún na Meán, and the law enforcement agency, An Garda Síochána, have been urged to take action. Coimisiún na Meán has indicated that it is engaging with the European Commission on the issue, reflecting the cross-border regulatory framework under EU law.
  • India: India’s Ministry of Electronics and Information Technology (MeitY) has issued a formal notice to X Corp’s chief compliance officer, alleging failures to comply with statutory due diligence obligations under the Information Technology Act 2000 and the Intermediary Guidelines and Digital Media Ethics Code Rules 2021. The Ministry has sought an “Action Taken Report” outlining immediate measures to prevent the generation and dissemination of obscene and sexually explicit content through AI tools such as Grok.
  • Malaysia: Malaysia’s Communications and Multimedia Commission has expressed “serious concern” in response to public complaints about the misuse of AI tools on X to create indecent and harmful manipulated images, including images involving women and minors. The regulator noted that such conduct may constitute an offence under Section 233 of the Communications and Multimedia Act 1998 and confirmed that investigations into alleged breaches are underway.
  • Brazil: In Brazil, federal deputy Erika Hilton has called for Grok to be suspended nationwide, alleging that the system has generated and distributed erotic images, including child sexual abuse material, without consent. She has referred the matter to both the Federal Public Prosecutor’s Office and the National Data Protection Authority, arguing that X should be disabled in Brazil pending full investigation.
  • United States: While no US regulator has announced a formal investigation, lawmakers have raised concerns. The Department of Justice has reiterated that it treats AI-generated child sexual abuse material with the utmost seriousness and will aggressively prosecute those who produce or possess such material, signalling increased enforcement focus in the area.
  • Australia: Australia’s eSafety Commissioner is investigating reports of non-consensual sexualised deepfake images generated by Grok. The regulator has confirmed that it has received multiple complaints since late 2025 and is assessing whether regulatory action is required. Commissioner Julie Inman Grant has stated that eSafety will use its enforcement powers where necessary.

Legal Framework

As regulators across the world continue to monitor the situation, it is worth considering deepfake pornography through a legal lens, particularly when its production and distribution are facilitated by third-party platforms.

In Australia, there is no single standalone law governing deepfakes. Instead, harmful AI-generated content is regulated through a combination of criminal law, online safety regulation and privacy law, with liability turning on the nature of the content and the harm caused. Where a deepfake depicts a child in a sexualised manner, it is unequivocally illegal under the Criminal Code Act 1995 (Cth). The law captures material that “appears to depict” a child, meaning AI-generated or manipulated images fall squarely within existing child sexual abuse material offences, regardless of whether the child is real.

For adults, non-consensual sexualised deepfakes are addressed through state and territory image-based abuse offences and, at a federal level, the Online Safety Act 2021 (Cth). That Act empowers the eSafety Commissioner to order the rapid removal of non-consensual intimate images, including fabricated or altered images, and to take enforcement action against platforms that fail to comply.

Australia’s Criminal Code Amendment (Deepfake Sexual Material) Act 2024 strengthened the Criminal Code Act 1995 by introducing offences for transmitting sexual material relating to an adult without their consent. The offences capture both unaltered material and material created or altered using technology. A standalone offence for the non-consensual sharing of adult private sexual material carries a maximum prison sentence of six years.

Despite the statutory protections that exist, the sharing of deepfake pornography can be very difficult to prosecute. While an upload can sometimes be traced to an IP address, that trail is easily obscured with a VPN, making it extremely difficult to identify the person responsible. Many victims of deepfake pornography do not know the perpetrator and cannot produce evidence of where the images came from. Furthermore, if the perpetrator is located outside Australia, they may be beyond the practical reach of Australian law enforcement.

Potential Reform

As a result of difficulties in prosecuting offences, regulatory attention has increasingly focused on platform responsibility and harm mitigation rather than on a unified deepfakes regime. The eSafety Commissioner has advocated stronger obligations on platforms to detect and remove synthetic CSAM, emphasising the need for proactive measures alongside criminal penalties. This suggests that Australia is moving toward a multi-layered approach that combines enforcement, platform regulation, and legislative clarity.

In this context, it makes sense to explore the rationale for a digital duty of care. Within the digital environment, it is the platforms that design the algorithms that push harmful content, the pathways that enable predators to contact children, the addictive loops keeping teens scrolling for hours on end, and the opaque moderation systems that routinely fail abuse survivors.

Yet although tech companies have actual control over these systemic risks, they owe no explicit duty of care to prevent foreseeable harm to users, unlike manufacturers, financial service providers, or even shopping centres.

Under common law negligence principles, courts have been hesitant to impose such a duty. They view platforms as facilitators, not custodians of public safety. Without legislative intervention, victims harmed by online exploitation, self-harm content, bullying, grooming or deepfake abuse are left with few avenues for redress. This is precisely where a digital duty of care would intervene.

A digital duty of care would place a statutory obligation on platforms to take reasonable and proportionate steps to prevent foreseeable harm within their services. It does not mandate censorship or convert tech companies into moral arbiters. Instead, it obliges them to: assess risks, design systems safely, implement effective age assurance, act quickly when harm emerges, and document what they are doing to protect users, particularly children.

The duty is preventative, not punitive. It regulates platform behaviour, not speech. And just as workplaces must adopt systems reasonably calculated to prevent harm, platforms should be required to operate digital environments that are not foreseeably dangerous.

If Australia were to legislate a digital duty of care, it would not be acting alone. In fact, the global trend is now decisively moving towards the codification of duties of care for digital platforms.

In the EU, a duty of care has been specifically legislated under the Digital Services Act. It places varying obligations on categories of services, platforms, and providers to target illegal content and disinformation and to ensure transparency in advertising.

For intermediary services, the legislation imposes an obligation to comply with orders to remove illegal content and to publish annual reports on their content removal and moderation activities. For very large online platforms and search engines, it creates obligations to install internal complaint-handling systems regarding the removal of content, comply with enhanced transparency obligations, conduct an annual risk assessment, establish an independent compliance function, and provide additional information and user choice in relation to online advertising and the recommender systems used on their platforms.

The UK Online Safety Act 2023 creates a similar duty of care for online platforms. It requires regulated services to conduct risk assessments at defined intervals, assess whether the service is likely to be accessed by children, determine how likely children are to be harmed by content on the service, and act against illegal or harmful content from their users.

Where to Next

Although the Albanese government has previously expressed its intention to develop and legislate a digital duty of care, little progress has been made towards draft legislation to bring that plan to fruition.

Regardless, the controversy around Grok demonstrates that technology has clearly outpaced existing child-protection frameworks, and without decisive, coordinated legal reform, including a digital duty of care, Australia will continue to see rapidly escalating harm.

