Feb 25 - OpenAI disclosed that it had suspended several ChatGPT accounts after determining they were involved in a range of abusive schemes, including romance scams, impersonation of legal and law-enforcement actors, and covert influence activity aimed at a Japanese political leader.
The company said the accounts typically combined the chatbot with other online tools and social media presences to carry out deceptive or criminal activity while posing as dating services, law firms, U.S. officials and other entities.
OpenAI provided a breakdown of specific behaviors tied to the banned accounts:
- A small cluster of accounts that OpenAI said likely originated in China used its models to request information on U.S. persons, online forums and the locations of federal buildings, and sought guidance on face-swapping software.
- The same accounts reportedly generated English-language emails addressed to state-level U.S. officials and to policy analysts working in business and finance, inviting recipients to take part in paid consultations.
- OpenAI said it banned a ChatGPT account linked to an individual associated with Chinese law enforcement whose activity included orchestrating a covert influence operation targeting Japanese Prime Minister Sanae Takaichi.
- A cluster of ChatGPT accounts was used to operate a dating scam directed at Indonesian men; OpenAI said the scheme likely defrauded hundreds of victims each month.
- According to OpenAI, that dating scam relied on ChatGPT to produce promotional copy and advertisements for a fake dating service that lured users to the platform and then pressured them to complete multiple tasks that required large payments.
- Several accounts used OpenAI’s models to pose as law firms and to impersonate real attorneys and U.S. law enforcement officials in approaches aimed at fraud victims, the company said.
OpenAI emphasized that these campaigns paired the outputs of its models with other channels, such as social media accounts, to carry out the abuse. In response, the company banned the implicated ChatGPT accounts.
Beyond the clusters and specific examples described, the disclosure did not give a total count of suspended accounts, nor did it detail any follow-up measures beyond the bans.
The examples released by OpenAI illustrate a range of misuse, from a covert influence operation aimed at a named political figure to structured financial fraud run against individuals through fabricated services.