World February 25, 2026

OpenAI Reveals Bans After ChatGPT Used in Dating Scams, Fake Law Firms and Influence Operations

Company discloses clusters of accounts that used its models alongside other tools to carry out fraud, impersonation and a covert influence campaign

By Priya Menon

OpenAI said on Feb 25 that it banned a number of ChatGPT accounts tied to Chinese law enforcement, romance scammers and coordinated influence operations. The company reported that some accounts combined its chatbot with social media and other tools to run dating frauds, impersonate attorneys and U.S. officials, and gather information about U.S. persons, online forums and federal building locations. One account, linked to an individual associated with Chinese law enforcement, was said to have orchestrated a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.

Key Points

  • OpenAI banned ChatGPT accounts linked to Chinese law enforcement, romance scammers and coordinated influence operations, including a covert campaign targeting Japanese Prime Minister Sanae Takaichi.
  • Some accounts combined ChatGPT outputs with social media and other tools to collect information on U.S. persons and federal building locations, solicit paid consultations from state-level officials and policy analysts, and run a dating scam targeting Indonesian men.
  • Sectors implicated by these activities include online dating platforms, legal services (through impersonation of law firms and attorneys), cybersecurity and government entities targeted by influence and information-gathering efforts.

Feb 25 - OpenAI disclosed that it has suspended several ChatGPT accounts after determining they were involved in a variety of abusive schemes, including romance scams, impersonation of legal and law-enforcement actors, and covert influence activity aimed at a Japanese political leader.


The company said these accounts frequently used the chatbot together with other online tools and social media presences to conduct deceptive or criminal activities while posing as dating services, law firms, U.S. officials and other entities.

OpenAI provided a breakdown of specific behaviors tied to the banned accounts:

  • A small cluster of accounts that OpenAI said likely originated in China used its models to request information on U.S. persons, online forums and the locations of federal buildings, and sought guidance on face-swapping software.
  • The same accounts reportedly generated English-language emails addressed to state-level U.S. officials and to policy analysts working in business and finance, inviting recipients to take part in paid consultations.
  • OpenAI said it banned a ChatGPT account linked to an individual associated with Chinese law enforcement whose activity included orchestrating a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.
  • A cluster of ChatGPT accounts was used to operate a dating scam directed at Indonesian men; OpenAI said the scheme likely defrauded hundreds of victims each month.
  • According to OpenAI, that dating scam relied on ChatGPT to produce promotional copy and advertisements for a fake dating service that lured users to the platform and then pressured them to complete multiple tasks that required large payments.
  • Several accounts used OpenAI’s models to pose as law firms and to impersonate real attorneys and U.S. law enforcement officials in approaches aimed at fraud victims, the company said.

OpenAI emphasized that these activities combined the outputs of its models with other channels, such as social media accounts, to facilitate the abusive campaigns. The company said it had taken action by banning the implicated ChatGPT accounts.

OpenAI's disclosure did not give a total count of suspended accounts beyond the clusters and specific examples noted, nor did it detail any follow-up measures beyond the bans described.


The examples released by OpenAI highlight multiple modes of misuse, from targeted influence operations directed at a named political figure to structured financial fraud targeting individuals through fabricated services.

Risks

  • Financial fraud risk to individuals — a cluster of accounts used ChatGPT to operate a fake dating service that likely defrauded hundreds of victims per month, requiring large payments from targets (impacting consumers and payment services).
  • Impersonation and reputational risk — several accounts posed as law firms and real attorneys or U.S. law enforcement to target fraud victims, raising concerns for the legal sector and affected individuals.
  • Information-gathering and influence risk to public-sector actors — accounts sought details about U.S. persons and federal building locations and reportedly supported a covert influence operation targeting a national political leader (affecting government and political communications).
