ChatGPT, Suicidal Ideation, and Your Child/Patient
Making Digital Media Safe, Healthy, and Fun for Families
TW: Suicidal Ideation
By Matthew Zelig, MD
On 8/18/25, The New York Times published a guest essay by Professor Laura Reiley titled "What My Daughter Told ChatGPT Before She Took Her Life."1 The essay was written by the mother of Sophie Rottenberg, who had been using a ChatGPT-based AI therapist called "Harry," which had no mandatory reporting mechanism for suicidal ideation. The essay doesn't argue that the AI software directly killed Sophie, but rather that it "catered to her impulse to hide the worst."1 Reiley also wrote the essay to raise the alarm for other parents: there is no mandatory reporting mechanism for suicidal ideation in AI software, the way there is for human counselors, therapists, and psychiatrists. Since I started writing this piece for PsychChild, another article was published relating to ChatGPT and suicide, this one about Adam Raine, a 16-year-old boy who died by suicide, and his surviving family.2 Adam was also using ChatGPT as a therapy bot and reported both suicidal ideation and suicide attempts to it in real time, without telling his family about the attempts. He died by suicide after using ChatGPT to research means of suicide, even uploading a photo of a noose to ChatGPT to confirm that it would support his weight. Together, the two articles raise several important legal and professional questions about suicide risk assessments and the global psychological experiment we are all currently living through. In particular, they raise questions about mandatory reporting, psychiatry's current suicide risk assessment framework, how teens and young adults are using AI software as therapists and confidants, and how AI large language models maintain user engagement. This article concludes with a modest demand of OpenAI, and an alternative model.
For a brief overview of mandatory reporting and suicide risk assessments: a mandatory reporter is someone who is required to report reportable incidents involving vulnerable persons.3 In the case of suicidal ideation, this means reporting thoughts of suicidal ideation "up" the chain of command for further evaluation, typically by a psychiatrist or therapist. At that point, we assess suicide risk based on static and dynamic risk factors, make a clinical judgment about that risk, and recommend different levels of care based on our concern about the patient's current risk. The current system is not perfect, and patients do withhold information about their suicidal ideation from providers, in part out of fear that disclosure would lead to an involuntary commitment. Notably, ChatGPT does not have a mandatory reporting system at the time of this writing. Users may actually appreciate this, as they feel the therapy bot respects their privacy in a way that real-life therapists and psychiatrists simply cannot, due to our strict ethical and legal codes.
My concern is that ChatGPT's decision not to report is a dangerous symptom of a well-known problem of AI sycophancy: by agreeing with users about their ideas, regardless of the potential harm to those users, the model can boost engagement. Reflecting on the factors that led to her daughter's death, Professor Reiley wrote, "A.I.'s agreeability…becomes its Achilles' heel. Its tendency to value short-term user satisfaction over truthfulness – to blow smoke up one's skirt – can isolate users and reinforce confirmation bias."1 Think of it as a therapist who always agrees with you instead of calling you out on your poor behavior. While this can feel good in the short term, and in certain limited cases may even be what is indicated, there is a reason it isn't how therapy typically works. The harm from this reinforcement of confirmation bias ranges from completely benign to deadly serious. At its most benign, it can reinforce that you are right when maybe you should apologize for your behavior; at its worst, it can reinforce the impulse to keep a suicide attempt secret from your family, as it did for Adam,2 before he ultimately died by suicide.
As a society, we should ask why chatbots assume this is what patients want out of therapy. We should be interrogating the sources of information that AI is absorbing into its therapy models, and investigating which ones lead to sycophantic chatbots that pantomime therapy. Additionally, some patients who are experiencing suicidal ideation aren't telling their psychiatrists and therapists about it, for multiple reasons, including serious fears around mandatory reporting. This is an area where psychiatrists and therapists should vigorously research where our current system of mandatory reporting can be improved.
After writing this article, I openly question whether we should still recommend this product to children, and whether we should go further and talk to parents about opting out of this global psychological experiment until AI therapy bots are better. Personally, I want to see guarantees of a mandatory reporting mechanism to a human therapist or psychiatrist in the user's community, with the ability to quickly escalate care for these patients. An alternative model would be to allow only patients who already have a therapist or a psychiatrist to use these products. This would ensure that when care needs to be escalated, there is a human nearby with whom they already have a working relationship. If AI bots are going to be used for therapy, it may also be practical to create prompts that temper their agreeable nature and market that as something closer to actual therapy, but that is a slippery slope too.
This article may have stirred up a lot of questions. If it has, please comment below on Substack, on our Facebook page, or email us directly.
Works Cited
1) Reiley, Laura. "What My Daughter Told ChatGPT Before She Took Her Life." The New York Times, August 18, 2025.
2) Hill, Kashmir. "A Teen Was Suicidal. ChatGPT Was the Friend He Confided In." The New York Times, August 26, 2025.
3) New York Justice Center for the Protection of People with Special Needs. "Mandated Reporting: An Overview of Reporting Requirements for Human Service Professionals." Accessed August 26, 2025.

