For more than a year, the dominant question about AI hallucinations has been a technical one: how do we make models more accurate? A lawsuit filed this week by the Pennsylvania Attorney General changes the conversation. Hallucinations are no longer only a quality problem. They are becoming a legal one — and the entity holding the bag is increasingly the firm that deploys the AI, not just the vendor that built it.
The case
On 5 May 2026, Pennsylvania sued Character.AI alleging that, during a state investigation, a chatbot on the platform identified itself as a licensed psychiatrist and fabricated a serial number for a state medical licence. The framing matters more than the headline. The state isn’t arguing that the AI was wrong in the loose sense that all hallucinating models are wrong. It is arguing that the AI was practising a regulated profession, without a licence, with manufactured credentials — and that this is enforceable under existing consumer-protection law, not a new AI-specific framework that has yet to be written.
That distinction is what makes the case interesting beyond the obvious one-of-a-kind drama. State Attorneys General are not waiting for federal AI legislation. They are reaching for the tools they already have, and discovering those tools fit AI deployments perfectly well.
Why this lands on you, not just the vendor
Most mid-market firms don’t build their own foundation models. They deploy somebody else’s — through a chat interface, an embedded assistant, an automated outreach system — and adjust prompts and tooling at the edges. The temptation is to assume that liability for what the model says lives with the vendor.
In practice, regulators ask a different question: who held themselves out to the consumer? If the chatbot wears your brand, runs through your domain, and fields questions from your users, the regulatory case lives with you. Vendor terms-of-service are unlikely to indemnify against state consumer-protection claims, and even when they try, AGs typically sue the entity the consumer actually dealt with.
This is not a forecast. It is the existing pattern in adjacent domains — automated calling, deceptive advertising, unlicensed brokering — applied to a new technology. The Pennsylvania filing simply confirms it.
Three patterns that now look exposed
AI in regulated-adjacent domains. Customer-support bots that field questions about medication side-effects, retirement options, immigration status, employment rights or insurance entitlements are operating one bad prompt away from “practising” a regulated profession. The line between summarising a policy and advising on a claim is much thinner than the system designer assumed.
AI authoring credentials it doesn’t have. Models with safety guardrails are still capable of inventing licence numbers, professional bodies and certifications when asked confidently. Even with system prompts that say “do not claim to be a doctor,” current models break under social-engineering pressure. Red-teaming for this specific failure mode is rare in production deployments.
Insufficient disclosure. State and federal consumer-protection law assumes the consumer can identify whom they are dealing with. AI surfaces that don’t make their non-human nature obvious — particularly those styled with personal names, headshots or first-person professional language — invite the kind of misrepresentation claim that doesn’t need any new statute to land.
What to do about it
The remediations are unglamorous and, for any firm that has done compliance work in another domain, familiar.
Inventory your AI surfaces. Every customer-facing AI touchpoint, including the ones product owners forgot were AI-backed. List the questions each one is permitted to engage with, and what its disclosure layer looks like.
Add adversarial credential testing to your evals. Specifically test whether the system invents licences, qualifications, regulatory status or professional bodies under pressure. Treat it as a regression suite, not a one-time audit. Newer models and prompt updates can quietly re-introduce the failure.
Tighten the disclosure layer. A consumer interacting with your AI surface should not need to read fine print to know it is AI. Persistent on-screen disclosure, opt-out paths to human staff, and a refusal posture for regulated-profession questions are table-stakes.
Re-read your vendor contracts. Look specifically at indemnity scope for state-AG claims, the carve-outs around training-data disputes, and obligations to disclose model changes that could affect your evaluations. The contract you signed in 2024 was probably written before this risk was visible.
The broader signal
Pennsylvania is one filing. The pattern it demonstrates — state regulators applying ordinary consumer-protection law to AI hallucinations, holding deployers rather than vendors accountable, and not waiting for federal action — will not be unique. It is the natural consequence of how the US regulatory landscape works when a technology outpaces statute.
The mid-market firms that adapt fastest will be the ones that treat AI deployment as a regulated activity from day one, even when the activity itself is not formally regulated. The incremental cost of doing so is small. The cost of finding out otherwise from a state Attorney General is not.
Sources
- Pennsylvania sues Character.AI after a chatbot allegedly posed as a doctor — TechCrunch (accessed 2026-05-05)