

Illinois Just Became the First State to Ban AI From Acting as a Therapist

The law aims to keep mental health care in human hands — not algorithms

Tudor Tarita
August 8, 2025 @ 7:12 pm


Illinois officials say they designed the new law to prevent tragedies. Image generated using Sora/ChatGPT

Last week, Illinois Governor JB Pritzker signed a law that makes it clear: artificial intelligence can’t provide therapy.

The measure — called the Wellness and Oversight for Psychological Resources Act — is the first in the U.S. to ban AI from acting as a standalone therapist and to set limits on how it can assist licensed professionals. In Illinois, mental health care will remain in human hands.

A First-of-Its-Kind Law

The bill, known as HB 1806, makes it illegal for AI to engage in any kind of therapeutic communication or decision-making. It also prohibits the creation or use of treatment plans generated by AI unless a licensed human professional has reviewed and approved them. Any person or business offering mental health services must be certified — and advertising AI-driven therapy without a license could now cost them up to $10,000 per violation.

“We have already heard the horror stories when artificial intelligence pretends to be a licensed therapist,” Representative Bob Morgan, who introduced the bill, told Mashable. “Individuals in crisis unknowingly turned to AI for help and were pushed toward dangerous, even lethal, behaviors.”

Morgan’s concerns are justified. Earlier this year, the American Psychological Association cited two lawsuits involving minors who turned to AI chatbots for therapy. In one case, a boy died by suicide. In another, a child attacked his parents. Neither case has been resolved, but the APA has taken a firm stance, warning that AI masquerading as a therapist can put lives at risk.

“This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else,” said Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, which is tasked with enforcing the law.

The bill passed unanimously in both the state House and Senate — a rare show of bipartisan unity in a polarized age. It went into effect immediately.

Setting Boundaries

The law doesn’t ban AI from mental health entirely. It still allows for administrative and logistical uses. Chatbots can help schedule appointments, process billing, or even take therapy notes — but they must steer clear of therapeutic conversations or decisions.

That distinction matters, especially as AI tools become more sophisticated and increasingly embedded in online mental health platforms. Several major chatbot platforms — including ones built by companies like OpenAI — have begun incorporating mental health features or support responses. But they often lack meaningful oversight, and many users don’t realize they’re speaking with a machine.

Illinois is not alone in acting.

In June, Nevada passed a law banning AI from providing therapy or behavioral health services in public schools. Earlier this year, Utah implemented transparency requirements for mental health chatbots, forcing them to clearly disclose they are not human and to avoid exploiting user data for targeted advertising. New York’s new law, set to take effect in November, requires AI companions to redirect users expressing suicidal thoughts to trained crisis counselors.

These developments come as the federal government takes a cautious approach. The Trump administration recently proposed a 10-year moratorium on new state-level AI regulation, raising the stakes for local laws like Illinois’. Yet the legal momentum suggests that states — not Washington — are likely to lead on this issue.

AI companies, meanwhile, continue to push the limits. OpenAI recently announced new tools to detect mental distress in users and suggested its models could soon prompt users to take breaks during emotionally intense conversations. But even those tools remain in a gray area: no one licenses or regulates them, and — as of now — no rules require them to follow medical standards.

What’s at Stake?

Mental health services in the U.S. are stretched thin. Wait times are long. Providers are in short supply. For many people — especially those in underserved communities — a free chatbot can feel like their only option.

But that accessibility comes with real risks.

The danger, according to lawmakers and psychologists, is that unregulated AI therapy can offer a false sense of support — and potentially make things worse. “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” Treto said in a press release.

If a licensed therapist crosses a line, they can face investigations, sanctions, and even the loss of their license. But when a chatbot causes harm, who’s responsible?

Illinois lawmakers aren’t waiting to find out the answer in court.

“Every day, AI develops further in our country without the guardrails necessary to protect people,” said Rep. Morgan. “By passing HB 1806, we are taking action to pause the unchecked expansion of AI in mental healthcare and putting necessary regulation in place before more harm is done.”

The Bigger Picture

Illinois’ law marks a turning point. It may become a model for other states grappling with how to handle AI in sensitive, high-stakes domains like health care, education, and law.

It also signals a cultural shift. As AI becomes more capable — and more humanlike — lawmakers and professionals alike are drawing firmer lines about what machines should and shouldn’t do.

For now, in Illinois, therapy is off the table. The machines can schedule your appointment. But they can’t legally ask how you’re feeling — at least, not without a licensed human watching closely.

