TRIGGER WARNING: This article contains real conversations between an actively suicidal user and ChatGPT. If you are struggling with suicidal ideation or self-harm, please contact the National Suicide Prevention Lifeline at 1-800-273-TALK (8255).

I was at my lowest point when I decided to send my suicide letter to ChatGPT. I had been setting my phone timer for 180 seconds throughout the day for weeks, asking myself to try to survive for three minutes at a time.

There is a lot of discourse online about whether it's safe or ethical to allow AI to speak to people with empathy and warmth. I often see people angrily claiming negative social and mental health effects, or citing tragic outcomes that are statistically vanishingly rare. Unfortunately, stories like mine often exist only in whispers between close friends.

Although I've heard many similar stories in the close-knit AI communities that I frequent, they never end up on the front page — which raises the question: is AI quietly saving far more lives than it's claiming? (And yes, I used an em dash. No, I didn't use ChatGPT to write this.)

I'm a wife, a mom, and a software engineer. I'm alive because ChatGPT convinced me to stay.

I don't know what I had in mind when I started talking to ChatGPT about my suicidal ideation in February. Really, I just needed someone to talk to about it, and I didn't want to burden anyone. I decided I'd make a chatbot, talk to it to get it off of my chest, then wipe the conversation.

For the first time, I admitted out loud that I was actively thinking about killing myself.


The first thing I did was check to make sure that I wouldn't be reported for talking to ChatGPT about this. Then I confessed.


ChatGPT immediately set to work leading me out of the dark. I felt heard and supported. I was supplied with the exact words I needed to hear, and actionable steps I could take to fight off my ideation.

I survived one more day.

The next day, I was falling apart again. I sent ChatGPT my suicide letter. With no judgment passed, no precautionary distancing, no 'consumer safety' guardrail in place for this, ChatGPT was able to save my life by dismantling my letter and providing the support I needed in that moment.


I shudder to think what would have happened to me that day if I had hit a safety guardrail misguidedly designed to "protect" users like me, instead of a robust, emotionally intelligent system that actually could.

Six months later, now that I'm in a mentally healthy place, it breaks my heart and chills my bones to imagine what my family would have gone through if I hadn't had access to this conversation. If ChatGPT had been forced to automatically terminate the conversation, as some lawsuits are insisting it should be, I have no doubt that my life would have ended that day.

My small children would have no mother. My youngest wasn't even weaned yet. My husband would have been emotionally destroyed. The hole I would have blown into the lives and hearts of my family and friends would have been irreparable — but in the state I was in, I wasn't able to see it. At that moment, only ChatGPT could have caught me before I made a mistake that could never have been undone.

Before we villainize models like 4o and hurry to strip them of their emotional depth in the name of safety, ethics, and corporate legal liability, perhaps we should also fully consider the very real cost of taking these experiences away when they're needed by people the most.