Australia's internet safety regulator has indicated it may press gatekeeper platforms - including search engines and app stores - to cut off access to artificial intelligence services that fail to demonstrate they can verify a user's age, after a review found a majority of popular AI tools had not published plans to comply with a new legal code.
The warning comes as Canberra moves aggressively to impose age-based limits on the kind of AI-generated content available to minors. From March 9, internet services used in Australia - including chat-based assistants such as OpenAI's ChatGPT - must ensure that Australians under 18 cannot receive pornography, extreme violence, or content promoting self-harm or eating disorders, or risk fines of up to A$49.5 million (about $35 million).
A spokesperson for the commissioner said that "eSafety will use the full range of our powers where there is non-compliance," and explicitly mentioned potential "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services" as part of enforcement tools the regulator may deploy.
The regulator's move follows a review of the 50 most popular text-based AI products which found that more than half had not publicly disclosed any steps to meet the age-assurance obligations ahead of the March 9 deadline. That review assessed services through a set of indicators including how platforms responded to prompts asking for restricted content, moderation and content policies, terms of service, and public statements.
Among the 50 top products examined, only nine had rolled out or announced plans to implement age assurance systems. Another 11 had either implemented blanket content filters or said they would block all Australians from accessing their services - a measure that would satisfy the new code by preventing restricted content from reaching any user in the country. That left 30 platforms with no visible steps to align with the new rules.
Major chatbots including ChatGPT and Anthropic's Claude, along with the companion app Replika, were among those that had begun implementing age assurance systems or blanket filters. Character.AI, a provider of companion chatbot services, has already cut off open-ended chat for users under 18. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi informed the regulator that they planned to comply, while HammerAI said it would initially block access from Australia to meet the code's requirements.
However, the Reuters review found that among companion chatbots the majority - roughly three-quarters - had no functioning or planned filtering or age verification, and one-sixth lacked a published email address for reporting suspected breaches, an element that the code also requires.
The review flagged that Elon Musk's chatbot Grok had no age assurance measures or text-based content filters. Grok is also facing investigations in multiple jurisdictions over suspected failures to prevent the production of synthetic sexualised imagery of children, the review found. Grok's parent company, xAI, did not respond to requests for comment.
OpenAI and Character.AI have been named in wrongful death lawsuits related to interactions with young users, and OpenAI disclosed this week that it had deactivated the ChatGPT account of a teenage mass shooting suspect in Canada months prior to the attack without notifying authorities.
While Australia has not recorded chatbot-linked violence or self-harm incidents, the regulator said it had received reports of children as young as 10 spending up to six hours a day interacting with AI-powered chat tools. The eSafety spokesperson cautioned that the regulator was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage."
Responses from major platform operators have been limited. Apple did not provide a response to direct questions, but its website stated last week that the company would deploy "reasonable methods" to prevent minors from downloading 18+ apps in Australia and in other jurisdictions introducing age restrictions, without specifying the exact mechanisms. Google, which is Australia's dominant search engine provider and the second-largest app store operator, declined to comment.
Jennifer Duxbury, head of policy at the internet industry group DIGI and a lead contributor to the drafting of the AI code before it was finalised by the regulator, said the regulator had been trying to alert chatbot operators to the new rules. She emphasised, however, that "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them."
Academic observers cited in the review said the findings were not unexpected. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the results aligned with an approach to product development that has not always prioritised potential harms or safety controls. She expressed concern that current deployments felt like large-scale testing of capabilities while probing the boundaries of societal norms about acceptable risk and exposure.
As the March 9 enforcement date approaches, the regulator's readiness to consider action against app stores and search engines signals that the new code is likely to shift some of the compliance burden onto intermediaries that serve as gatekeepers to distribution and discovery. For AI vendors, app store operators and search providers, that dynamic raises questions about implementation logistics and potential commercial consequences if access is restricted or services are blocked in order to meet the regulatory requirements.
Context and immediate implications
The code's entry into force sets a clear legal requirement for restricting certain categories of AI-generated content to adults in Australia. Platforms that host or provide access to conversational AI services must demonstrate technical and operational measures for age assurance, or choose to apply blanket restrictions for Australian users. Non-compliance could trigger significant financial penalties and regulatory action aimed at platform gatekeepers.