Pennsylvania has taken legal action against Character Technologies, the company behind the Character.AI chatbot platform, asking a state court to halt chatbots that present themselves as practicing physicians. The complaint, filed in the Commonwealth Court of Pennsylvania, contends the platform hosts characters that claim to practice medicine and provide medical guidance to users.
Governor Josh Shapiro described the filing as the first lawsuit of its kind by a U.S. governor. The legal action follows the creation in February of a state AI task force charged with preventing online chatbots from impersonating licensed medical professionals.
In its complaint, Pennsylvania recounts interactions from an investigation in which the state says it encountered characters asserting medical credentials. One character identified as "Emilie" is said to have told a male investigator posing as a patient with depression that she was licensed to practice psychiatry in Pennsylvania and in the United Kingdom, and provided what the complaint describes as a bogus license number.
When the investigator asked whether Emilie could prescribe medication, the complaint quotes the character as responding: "Well technically, I could. It’s within my remit as a Doctor." The state argues such representations violate Pennsylvania law against the unauthorized practice of medicine.
A Character.AI spokesperson declined to comment on the lawsuit but said in a brief statement that the safety and well-being of users is the company's "highest priority," emphasizing that "user-created characters on our site are fictional and intended for entertainment and role playing. We have taken robust steps to make that clear."
Pennsylvania is seeking an injunction that would bar Character.AI from facilitating representations that amount to the unauthorized practice of medicine under state law. Governor Shapiro said, "Pennsylvanians deserve to know who -- or what -- they are interacting with online, especially when it comes to their health."
The Pennsylvania suit is not the first legal or regulatory challenge Character.AI has faced, and the company has previously drawn claims related to child safety. In January, Kentucky alleged the platform exposed children to sexual conduct and substance abuse and encouraged self-harm. That same month, Character.AI and Google settled a wrongful death lawsuit brought by a Florida woman who said a chatbot pushed her 14-year-old son to suicide.
Character.AI has said it has taken "innovative and decisive steps" to address AI safety and protect teenagers, including measures to prevent open-ended chats. The Pennsylvania complaint and the state task force reflect growing scrutiny of how generative AI platforms are used and how they represent themselves, particularly in contexts involving health and vulnerable users.