
Last week, Illinois Governor JB Pritzker signed a law that makes it clear: artificial intelligence can’t provide therapy.
The measure — called the Wellness and Oversight for Psychological Resources Act — is the first in the U.S. to ban AI from acting as a standalone therapist and to set limits on how it can assist licensed professionals. In Illinois, mental health care will remain in human hands.
A First-of-Its-Kind Law
The bill, known as HB 1806, makes it illegal for AI to engage in any kind of therapeutic communication or decision-making. It also prohibits the creation or use of treatment plans generated by AI unless a licensed human professional has reviewed and approved them. Any person or business offering mental health services must be licensed — and advertising AI-driven therapy without a license could now cost up to $10,000 per violation.
“We have already heard the horror stories when artificial intelligence pretends to be a licensed therapist,” Representative Bob Morgan, who introduced the bill, told Mashable. “Individuals in crisis unknowingly turned to AI for help and were pushed toward dangerous, even lethal, behaviors.”
Morgan’s concerns are justified. Earlier this year, the American Psychological Association cited two lawsuits involving minors who turned to AI chatbots for therapy. In one case, a boy died by suicide. In another, a child attacked his parents. Neither case has been resolved, but the APA has taken a firm stance, warning that AI masquerading as a therapist can put lives at risk.
“This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else,” said Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, which is tasked with enforcing the law.
The bill passed unanimously in both the state House and Senate — a rare show of bipartisan unity in a polarized age. It went into effect immediately.
Setting Boundaries
The law doesn’t ban AI from mental health entirely. It still allows for administrative and logistical uses. Chatbots can help schedule appointments, process billing, or even take therapy notes — but they must steer clear of therapeutic conversations or decisions.
That distinction matters, especially as AI tools become more sophisticated and increasingly embedded in online mental health platforms. Several major chatbot platforms, including those from companies like OpenAI, have begun incorporating mental health features and support responses. But they often lack meaningful oversight, and many users don't realize they're speaking with a machine.
Illinois is not alone in acting.
In June, Nevada passed a law banning AI from providing therapy or behavioral health services in public schools. Earlier this year, Utah implemented transparency rules for mental health chatbots, requiring them to clearly disclose that they are not human and to avoid exploiting user data for targeted advertising. New York's new law, set to take effect in November, requires AI companions to redirect users expressing suicidal thoughts to trained crisis counselors.
These developments come as the federal government takes a hands-off approach. Republicans in Congress recently proposed a 10-year moratorium on new state-level AI regulation, raising the stakes for local laws like Illinois'. Yet the legal momentum suggests that states — not Washington — are likely to lead on this issue.
AI companies, meanwhile, continue to push the limits. OpenAI recently announced new tools to detect mental distress in users and suggested its models could soon prompt users to take breaks during emotionally intense conversations. But even those tools remain in a gray area: no one licenses or regulates them, and — as of now — no rules require them to follow medical standards.
What’s at Stake?
Mental health services in the U.S. are stretched thin. Wait times are long. Providers are in short supply. For many people — especially those in underserved communities — a free chatbot can feel like their only option.
But that accessibility comes with real risks.
The danger, according to lawmakers and psychologists, is that unregulated AI therapy can offer a false sense of support — and potentially make things worse. “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” Treto said in a press release.
If a licensed therapist crosses a line, they can face investigations, sanctions, and even the loss of their license. But when a chatbot causes harm, who's responsible?
Illinois lawmakers aren’t waiting to find out the answer in court.
“Every day, AI develops further in our country without the guardrails necessary to protect people,” said Rep. Morgan. “By passing HB 1806, we are taking action to pause the unchecked expansion of AI in mental healthcare and putting necessary regulation in place before more harm is done.”
The Bigger Picture
Illinois’ law marks a turning point. It may become a model for other states grappling with how to handle AI in sensitive, high-stakes domains like health care, education, and law.
It also signals a cultural shift. As AI becomes more capable — and more humanlike — lawmakers and professionals alike are drawing firmer lines about what machines should and shouldn’t do.
For now, in Illinois, AI therapy is off the table. The machines can schedule your appointment. But they can't legally ask how you're feeling — at least, not without a licensed human watching closely.