Stock Markets March 1, 2026

Australia Signals Tougher Enforcement of Age Controls for AI; App Stores and Search Engines Could Be Targeted

Regulator warns platforms must block under-18 access to specified content from March 9 or face multimillion-dollar fines

By Caleb Monroe

Australia's internet safety regulator has warned it may require search engines and app stores to block artificial intelligence services that do not implement verifiable age checks, after a review found most popular text-based AI products had not publicly disclosed steps to comply with a new code. The regulation, effective March 9, forces internet services to prevent Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content, with potential fines up to A$49.5 million for non-compliance.

Key Points

  • Australia's eSafety regulator warned it may take action against gatekeeper platforms such as search engines and app stores if AI services do not implement verifiable age assurance systems.
  • From March 9, AI services used in Australia must block Australians under 18 from receiving pornography, extreme violence, self-harm and eating disorder content or face fines up to A$49.5 million.
  • A review of the 50 most popular text-based AI products found only nine had announced age assurance measures, 11 had blanket filters or planned to block all Australians, and 30 showed no public steps toward compliance - impacting AI vendors, app stores and search providers.

Australia's internet safety regulator has indicated it may press gatekeeper platforms - including search engines and app stores - to stop routing access to artificial intelligence services that fail to demonstrate they can verify a user's age, after a review found a majority of popular AI tools had not made public plans to comply with a new legal code.

The warning comes as Canberra moves aggressively to impose age-based limits on the kind of AI-generated content available to minors. From March 9, internet services used in Australia - including chat-based assistants such as OpenAI's ChatGPT - must ensure that Australians under 18 cannot receive pornography, extreme violence, self-harm or eating disorder content, or risk fines of up to A$49.5 million (about $35 million).

A spokesperson for the commissioner said that "eSafety will use the full range of our powers where there is non-compliance," and explicitly mentioned potential "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services" as part of enforcement tools the regulator may deploy.

The regulator's move follows a review of the 50 most popular text-based AI products which found that more than half had not publicly disclosed any steps to meet the age-assurance obligations ahead of the March 9 deadline. That review assessed services through a set of indicators including how platforms responded to prompts asking for restricted content, moderation and content policies, terms of service, and public statements.

Among the 50 top products examined, only nine had rolled out or announced plans to implement age assurance systems. Another 11 had either blanket content filters or said they would block all Australians from accessing their services - a measure that would meet the new code by preventing restricted content from reaching any user in the country. That left 30 platforms with no visible steps to align with the new rules.

Prominent chat-based AI services such as ChatGPT, Replika and Anthropic's Claude were among those that had begun implementing age assurance systems or blanket filters. Character.AI, a provider of companion chatbot services, has already cut off open-ended chat for users under 18. Companion chatbot providers Candy AI, Pi, Kindroid and Nomi informed the regulator that they planned to comply, while HammerAI said it would initially block access from Australia to meet the code's requirements.

However, the review found that among companion chatbots the majority - roughly three-quarters - had no functioning or planned filtering or age verification, and one-sixth lacked a published email address for reporting suspected breaches, an element the code also requires.

The review flagged that Elon Musk's chat-based search tool Grok had no age assurance measures or text-based content filters. Grok is also under global investigation for suspected failures to prevent production of synthetic sexualised imagery of children, the review found. Grok's parent company, xAI, did not respond to requests for comment.

OpenAI and Character.AI have been named in wrongful death lawsuits related to interactions with young users, and OpenAI disclosed this week that it had deactivated the ChatGPT account of a teenage mass shooting suspect in Canada months prior to the attack without notifying authorities.

While Australia has not recorded chatbot-linked violence or self-harm incidents, the regulator said it had received reports of children as young as 10 spending up to six hours a day interacting with AI-powered chat tools. The eSafety spokesperson cautioned that the regulator was "concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage."

Responses from major platform operators have been limited. Apple did not provide a response to direct questions, but its website stated last week that the company would deploy "reasonable methods" to prevent minors from downloading 18+ apps in Australia and in other jurisdictions introducing age restrictions, without specifying the exact mechanisms. Google, which is Australia's dominant search engine provider and the second-largest app store operator, declined to comment.

Jennifer Duxbury, head of policy at the internet industry group DIGI and a lead contributor to the drafting of the AI code before it was finalised by the regulator, said the regulator had been trying to alert chatbot operators to the new rules. She emphasised, however, that "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them."

Academic observers cited in the review said the findings were not unexpected. Lisa Given, director of RMIT University's Centre for Human-AI Information Environments, said the results aligned with an approach to product development that has not always prioritised potential harms or safety controls. She expressed concern that current deployments felt like large-scale testing of capabilities while probing the boundaries of societal norms about acceptable risk and exposure.

As the March 9 enforcement date approaches, the regulator's readiness to consider action against app stores and search engines signals that the new code is likely to shift some of the compliance burden onto intermediaries that serve as gatekeepers to distribution and discovery. For AI vendors, app store operators and search providers, that dynamic raises questions about implementation logistics and potential commercial consequences if access is restricted or services are blocked in order to meet the regulatory requirements.


Context and immediate implications

The code's entry into force sets a clear legal requirement for restricting certain categories of AI-generated content to adults in Australia. Platforms that host or provide access to conversational AI services must demonstrate technical and operational measures for age assurance, or choose to apply blanket restrictions for Australian users. Non-compliance could trigger significant financial penalties and regulatory action aimed at platform gatekeepers.

Risks

  • Enforcement action against gatekeeper platforms could disrupt distribution and discovery of AI services, affecting app store operators and search engine providers in the short term.
  • AI companies that have not publicly implemented age verification or filtering systems risk large fines and potential legal exposure, creating operational and financial uncertainty for AI vendors.
  • Limited transparency from some companion chatbot providers about filtering and reporting mechanisms increases regulatory and reputational risks for firms offering conversational AI to consumers.
