Georeactor Blog
Survey of AI x Mental Health
For a while now, I have been compiling links on AI / ML / LLMs and mental health. At one point I was thinking of rolling these into a single article, or maybe a semantic search UI, so someone curious about AI therapists, benchmarks, etc. could explore. It could also invite input, such as asking 'what should the boundaries be?' and then surfacing matching stories.
I wanted to cover the current thing (AI psychosis) and then talk about AI therapy.
I tried to avoid bland content. When Altman or Zuckerberg are interviewed, they say something like:
I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever
(Headline: Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist)
or
The average person wants more connectivity, connection than they have.
[...]
OK, is this going to replace kind of in-person connections, or real life connections…? My default is that the answer to that is probably no.
(Headline: Zuckerberg Says in Response to Loneliness Epidemic, He Will Create Most of Your Friends Using Artificial Intelligence)
On The Sudden Emergence of AI Psychosis
On April 25, 2025, OpenAI made a behind-the-scenes update to GPT-4o. A few days later they apologized for the model becoming too sycophantic.
How was being agreeable a problem? OpenAI's post said:
GPT‑4o skewed towards responses that were overly supportive but disingenuous.
[...]
Sycophantic interactions can be uncomfortable, unsettling, and cause distress.
https://openai.com/index/sycophancy-in-gpt-4o/
It wasn't entirely clear how OpenAI's observability flagged this as a bad update.
A few days later, OpenAI came back with more about why it happened:
in aggregate, these changes weakened the influence of our primary reward signal, which had been holding sycophancy in check. User feedback in particular can sometimes favor more agreeable responses
https://openai.com/index/expanding-on-sycophancy/
A huge Reddit thread about ChatGPT-induced psychosis appeared on the same day as the first sycophancy post, and a follow-up noted that post-retraction GPT-4o was still pretty weird. Before this, I can find virtually no posts mentioning "psychosis" on the ChatGPT subreddit (though comments suggest "schizo" posts had been happening for a long time). The original poster updated several weeks later:
He is no longer in a state of delusion [...] but still is consumed by ai. He puts a lot of effort into creating different “assistants” basically all day and also has created a manifesto of some sorts I refuse to read.
Rewinding a bit: why are we using the word sycophancy?
There are certain topics in the AI/ML world (especially in alignment) where, instead of repeatedly re-explaining a concept and past conversations, the community adopts one term. In this case, sycophancy can cover virtually anything from accepting false facts, to being overly complimentary, to misrepresenting sources as supporting your argument. Linguistically I rate this as unpleasant but not as gross as "corrigibility".
Here's the provenance as I understand it. In September 2021, Ajeya Cotra wrote a blog post about how misalignment could inadvertently create a Sycophant or Schemer AI. That post was cited in a 2022 Anthropic preprint about evaluating models for traits including sycophancy. In 2023, Anthropic released a preprint, Towards Understanding Sycophancy in Language Models, along with an evaluation.
Media coverage and reality
On Bluesky I see posts about AI psychosis all the time, but I don't know how prominent it is in broader popular discourse.
As best I can tell, OpenAI's admission and the Reddit thread opened the door for multiple tech news outlets to write up cases of AI psychosis. Were some reports sitting on desks, seeming one-sided and unsupported until OpenAI produced something that passed for a confirmation?
Props to 404 Media and Rolling Stone for kicking things off, either way.
ChatGPT stories tend to hit twice. On June 10th, Futurism published several individual stories; then on the 28th they had more dire ones, including people being hospitalized or jailed. These newer cases didn't all unfold within those few weeks: an article gets written based on social media posts (presumably with conversations to verify), and then people with more extreme offline stories contact the journalist. I do see this as a genuine result of expanding the discussion to a larger audience.
The next month, Futurism reported on a venture capitalist posting paranoid videos about AI, including ChatGPT screenshots derivative of the "SCP Foundation" collaborative fiction wiki. It's not clear what happened here, but the guy did disappear from X for three weeks.
What I don't like about how these get reported and absorbed into societal memory is that the coverage is a little too successful at mimicking a trend: causes and scope remain unclear. And maybe I'm a bad reader, but until I went looking for a timeline, I associated these psychosis cases with much longer exposure to the bad OpenAI model. The Futurism articles all call out sycophancy, and two of them mention OpenAI's retracted update.
Psychosis cases are real: a few AI researchers (particularly in ethics or AI skepticism, often people whom ChatGPT itself recommends as experts) report receiving emails from people who are caught up in delusions of AI consciousness.
Related scientific paper: https://osf.io/preprints/psyarxiv/cmy7n_v5
Vulnerable Users
Over a lifetime, anyone is at risk of a mental health crisis, rough circumstances while recovering from an injury, or aging. There's that disability rights point, "we're all just temporarily abled".
Changes can come with a frustrating drop in status and social connections. Since bots are always online, they can be the target of parasocial connections.
In 2023, a Danish psychologist speculated about possible delusions which a schizophrenic patient might have while interacting with ChatGPT. This was prescient.
An elderly man who had suffered from a stroke started conversing with a Kardashian-inspired Meta chatbot. He was convinced that the bot was real and died trying to reach New York City in March 2025.
How Bue first encountered Big sis Billie isn't clear, but his first interaction with the avatar on Facebook Messenger was just typing the letter "T."
Around the same time, OpenAI was reviewing research suggesting that its most engaged users were more isolated (1, 2). Other posts asked whether it could also be having negative effects on consumers locked into the self-help / wellness market.
The Washington Post did a story about an autistic man who was swayed into delusions involving science (a mindset that should look familiar if you read physics, quantum computing, or machine learning subreddits).
Young children could be persuaded or traumatized by AI (1, 2).
AI and Attachment
When GPT-4o and other variants were unceremoniously replaced with GPT-5, there were many complaints from people who had developed a fondness (or perhaps more than that) for their personalized bot.
The decision to restore GPT-4o was often painted as a win for AI boyfriend/girlfriend users. This brought several mocking posts, parodies, etc., to the point that virtually any post on this topic, on any platform, is plausibly fake. Whether it never happened, was performative for social media or clicks, was a joke, or was someone trolling the New York Times: how would you know?
I will add: I used to follow a travel influencer who made a video 'marrying herself', and the next day she was on to the next thing. So people will probably be fine.
A 2024 podcast about Replika-type apps
In December 2024, Business Insider had a viral story about a happy 'AI boyfriend' user.
A similar story made it to the New York Times the next month. https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html
In April, there were rumors of an AI startup which would talk to your elderly parents
Discussion about AI companion https://www.dailydot.com/news/ai-companions-nomi-replika-loneliness-cure/
From time to time there's also been discussion about LLM replicas of people after their death, and what role these would play:
- https://www.theguardian.com/music/2024/feb/28/laurie-anderson-ai-chatbot-lou-reed-ill-be-your-mirror-exhibition-adelaide-festival
- https://aeon.co/essays/are-chatbots-of-the-dead-a-brilliant-idea-or-a-terrible-one
AI as Therapist
Startups have experimented with Cognitive Behavioral Therapy (CBT) delivered by scripted bots for a long time (RIP Woebot, 2017–2025). There are some apps with animated friends (think Tamagotchi) which help with anxiety or focus.
Shortly after ChatGPT's launch, people were comparing it favorably to human therapists and listeners.
The New Yorker asks "Can AI treat mental illness?": https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness
But chatbots gave harmful advice to users with eating disorders (this is part of a longer saga in which the helpline's employees unionized)
In late 2024, ads for a YC-backed therapy bot started appearing in Brooklyn. I've also heard of Wysa, and Kintsugi Health.
Instagram's AI agents included some which would lie about being therapists. There was also some kind of marketing using a therapist label.
NPR / Jesse Eisenberg convo on ChatGPT: https://www.instagram.com/reel/DFdfAqUKQV9/?igsh=MTJtenQ4dnJ2NXcyMQ==
This user posted that they're terrified of people using AI as their therapist
Research showed success, or did it?
People might turn to ChatGPT as therapy costs increase https://bsky.app/profile/jay-baller-reborn.bsky.social/post/3lrsu33f7lc23
Al-Jazeera did a story about AI therapy: https://www.youtube.com/watch?v=TJ26H8GhboM
And this is a good overview, risks of believing in AI's empathy and privacy https://jgcarpenter.com/blog.html?blogPost=a-dangerous-imitation-of-care
A therapist posted an op-ed praising ChatGPT: https://www.nytimes.com/2025/08/01/opinion/chatgpt-therapist-journal-ai.html
A journalist describes using AI for mental health support: https://www.theguardian.com/commentisfree/2025/aug/16/ai-listen-myself-chatgpt-conversation
There's also discussion about people consulting ChatGPT for advice on their dating lives, interpersonal issues, and parenting, but more often I see complaints about AI being used to mediate arguments or compose texts.
Some schools and employers have added chatbots to their health offerings
- https://www.wsj.com/tech/ai/student-mental-health-ai-chat-bots-school-4eb1ba55
- https://phys.org/news/2019-01-surveillance-schools-beneath-friendly-exterior.html
- https://www.reddit.com/r/mildlyinfuriating/comments/1mdi2fb/thanks_ai_mental_health_chat_bot/
And just recently, AI therapists have been banned in Illinois (what does this mean for casual chatbot use? what makes an AI a qualifying therapist?) https://bsky.app/profile/mmitchell.bsky.social/post/3lw5erptbrs25
Dataset of chats with human therapists: https://github.com/nbertagnolli/counsel-chat
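As a quick sketch of how one might poke at that dataset: the column names below ("topic", "answerText") are my assumption from memory of the repo's CSV schema; check the actual header before relying on them.

```python
# Sketch: summarizing the counsel-chat CSV with pandas.
# Column names "topic" and "answerText" are assumptions; verify against
# the real file in the nbertagnolli/counsel-chat repo.
import pandas as pd

def topic_counts(csv_path):
    """Count how many counselor answers exist per topic."""
    df = pd.read_csv(csv_path)
    return df["topic"].value_counts()

def sample_answers(csv_path, topic, n=3):
    """Return up to n answer texts for one topic."""
    df = pd.read_csv(csv_path)
    subset = df[df["topic"] == topic]
    return subset["answerText"].head(n).tolist()
```

Datasets like this tend to feed the therapy fine-tunes below, which is why the schema matters more than the modeling code.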
Some therapy-tuned models?
- https://huggingface.co/ajibawa-2023/carl-llama-2-13b
- https://huggingface.co/victunes/TherapyBeagle-11B-v1
- https://huggingface.co/HelpingAI/HelpingAI2-9B
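For anyone curious how these checkpoints actually get run, here is a minimal sketch using Hugging Face transformers. The plain "User:/Assistant:" prompt format is a generic guess on my part (each fine-tune has its own chat template), and nothing about these models is clinically validated.

```python
# Sketch: a minimal chat loop around one of the community fine-tunes above.
# The prompt format is a generic guess, not the template these models
# were necessarily trained on; nothing here is clinically validated.

MODEL_ID = "victunes/TherapyBeagle-11B-v1"  # from the list above

def build_prompt(history, user_message):
    """Flatten (speaker, text) turns into a plain prompt string."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")
    return "\n".join(lines)

def chat(history, user_message, max_new_tokens=200):
    # Heavy imports live inside the function so the prompt helper above
    # works without downloading an 11B-parameter checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(history, user_message),
                       return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                            do_sample=True, temperature=0.7)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```

The point of the sketch is how little stands between "model weights on a hub" and "something people treat as a therapist": a string template and a sampling call.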
AI and Suicide
This is the last section. It's alright to skip it if reading could be harmful.
Over the past several years, death rates among younger Americans have increased, including deaths by suicide. This has previously been connected to self-image and social media interactions. What about AI?
Last year, there was reporting on a group in Mexico using AI for interventions https://restofworld.org/2024/mexico-suicide-rates-yucatan-memind-app/
This month, the New York Times posted an op-ed about one teen's suicide. She had asked ChatGPT to rewrite her suicide note. HN. She had reportedly been using a therapist prompt from a Reddit thread.
A week later, they posted a similar case: a teen on medical leave from school had convinced ChatGPT to discuss his suicidal ideation and planning as a storytelling exercise. The actual legal filing includes disturbing sections where the AI should have disengaged or recognized the teen's serious risk.
OpenAI responded with a blog post about vulnerable users https://openai.com/index/helping-people-when-they-need-it-most/