
Sep 19, 2025
“ChatGPT as your therapist?” asks Scientific American. “Here’s why that’s so risky.”
C. Vaile Wright, a licensed psychologist and senior director of the American Psychological Association’s Office of Health Care Innovation, told Scientific American last month that the psychological profession’s greatest concern about AI therapists is that “they’re actually being coded in a way to keep you on the platform for as long as possible because that’s the business model. And the way that they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy.”
The state of Illinois recently went so far as to ban AI therapy altogether, with the secretary of the Illinois Department of Financial & Professional Regulation declaring in August, “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients.”
These contentions are interesting but misleading. First, the term “AI therapist” can refer to anything from a person having a back-and-forth therapeutic conversation with, say, ChatGPT or Grok, to someone using a purpose-built mental health chatbot like Sonia, backed by Y Combinator, or Slingshot AI, backed by a16z. The former, totally unregulated and self-guided, is far more likely to simply affirm the user than the latter, which is specifically designed to act as a therapist; yet Illinois’ ban covers both, which makes it counterproductive.
But opponents of AI therapists are making a deeper mistake: they assume that contemporary therapy is healthy or good to begin with.