Becca Monaghan
Aug 08, 2025
Illinois has become the first US state to enact legislation banning artificial intelligence from offering mental health advice, with fines of up to $10,000 for violations. The new law, the Wellness and Oversight for Psychological Resources (WOPR) Act, was signed by Governor JB Pritzker earlier this week.
Under the new law, AI-driven platforms are barred from offering any kind of mental health guidance or therapeutic judgment, such as diagnosing conditions or suggesting treatment strategies. Oversight and enforcement will be handled by the Illinois Department of Financial and Professional Regulation.
Human therapists are still permitted to use AI for administrative tasks such as scheduling and note-taking. However, the law draws a clear line when it comes to AI interacting with users in a clinical capacity.
“If you would have opened up a shop claiming to be a clinical social worker, you’d be shut down quickly. But somehow we were letting algorithms operate without oversight,” said Kyle Hillman, legislative director of the National Association of Social Workers, speaking to Axios.
The law also distinguishes between general wellness apps, such as meditation platforms like Calm, which remain unaffected, and AI services that market themselves as always-available mental health support tools.
Meanwhile, amid growing demand for affordable and accessible therapy, many people have started leaning on AI-powered tools such as ChatGPT for support, treating them as an alternative to traditional therapy sessions with qualified professionals.
However, while AI tools can provide general support, they ultimately lack the tailored, emotionally intelligent care that professional therapy offers.
Dr Robin Lawrence, a psychotherapist and counsellor at 96 Harley Street with over 30 years of experience, underlines that AI is not a real person.
"It does not have the emotional intelligence of a human," Dr Lawrence previously told Indy100, arguing that vulnerable individuals should not be forced into relying on AI by cost constraints. He warns that a client may end up feeling "no better than when they started," and that in the worst cases the consequences could be tragic.