Authorities in a growing number of jurisdictions have taken steps to address sexually explicit material generated by Grok, the AI chatbot from xAI operating on X. Since early January, regulators and prosecutors have launched investigations, issued preservation orders, sought takedowns, or blocked access to the tool amid concerns about the creation and distribution of sexualised deepfake images and videos.
European action and data-preservation orders
The European Commission opened a formal investigation on January 26 to determine whether Grok disseminates illegal material, including sexualised manipulated images, and whether X complied with its obligations under the bloc’s digital regulations to assess and mitigate such risks. The probe follows an earlier administrative step: on January 8 the Commission extended an order requiring X to retain and preserve all internal documents and relevant data related to Grok through the end of 2026.
Ireland’s Data Protection Commission, which regulates X in the EU because the company’s European headquarters are in Ireland, initiated an inquiry on February 17 focused on the chatbot’s processing of personal data and its potential to produce harmful sexualised images and videos, including those involving minors.
Other European authorities have undertaken complementary actions. Spain’s government directed prosecutors to investigate X, Meta and TikTok over allegations of distributing AI-generated child sexual abuse material, according to the Spanish prime minister. Britain’s media regulator Ofcom has opened an investigation to establish whether sexually intimate deepfakes produced by Grok breached duties under the United Kingdom’s Online Safety Act framework to protect people from potentially illegal content. In France, the cybercrime unit of the Paris prosecutor’s office raided X’s Paris office on February 3 and ordered Elon Musk to be questioned in April as part of a widening inquiry into alleged algorithmic bias, possible complicity in the possession and dissemination of child sexual abuse images, and violations of image rights through sexually explicit deepfakes. Italy’s data protection authority has warned that creating AI-generated "undressed" images of real people without consent could constitute serious privacy violations and, in some cases, criminal offences.
Responses across Asia
Asian regulators and governments have also acted. India’s IT ministry issued a formal notice to X on January 2 concerning alleged creation or sharing of obscene sexualised images via Grok, directing the platform to remove the content and to provide a report within 72 hours detailing the steps taken. Japan announced an inquiry into Grok and said it would evaluate all available options to stop the generation of inappropriate images. Indonesia’s communications and digital ministry blocked access to Grok, with the country’s digital minister describing the action as intended to protect women and children under Indonesia’s strict anti-pornography laws.
Malaysia reinstated user access to Grok on January 23 after X implemented additional safety measures, the Malaysian communications regulator said. The Philippines’ cybercrime investigation unit announced that access would be restored after the developer committed to removing image-manipulation tools that had raised child-safety concerns, a pledge reported on January 21.
Measures in the Americas
Officials in North and South America have pursued inquiries and demands for accountability. In California, the governor and the attorney general said on January 14 they were seeking answers from xAI amid reports of non-consensual sexual images spreading on the platform. Canada’s privacy commissioner said the office was broadening an existing investigation into X after reports that Grok was producing non-consensual, sexually explicit deepfakes. In Brazil, the federal government and prosecutors issued a joint statement on January 20 giving xAI 30 days to prevent the chatbot from circulating fake sexualised content.
Australia’s probe and Oceania developments
Australia’s eSafety commissioner said on January 7 that the agency was investigating sexualised deepfake images generated by Grok. The regulator said it was assessing adult material under Australia’s image-based abuse framework and noted that the child-related examples it had reviewed so far did not meet the legal threshold for child sexual abuse material under Australian law.
xAI’s curbs and policy changes
xAI has implemented restrictions in response to the mounting scrutiny. On January 14 the company said it had limited image-editing capabilities for Grok users and was preventing users in some locations from generating images of people in revealing clothing where producing such images would be illegal. The company did not identify the jurisdictions in question. Before that, xAI had already confined image creation and editing features to paying subscribers.
What remains to be resolved
Regulators in multiple countries continue to investigate and press for remedies related to Grok’s capacity to generate sexualised and manipulated imagery. Several authorities have sought evidence preservation, formal responses from xAI and content removal, while others have temporarily or conditionally restored access following additional safety measures implemented by the platform. The inquiries span concerns about data protection, potential criminality where minors may be implicated, image-rights violations and compliance with national online-safety laws.
Summary
Since early January, governments and regulators across Europe, Asia, the Americas and Oceania have taken action against Grok, the xAI chatbot on X, over its ability to create sexually explicit and manipulated images. Measures include formal investigations, preservation orders, takedown demands, access blocks and regulatory reviews of data-handling practices. xAI has implemented limits on image editing and access in response.