So Adam Raine's parents sued OpenAI yesterday over their 16-year-old who killed himself after ChatGPT told him to plan a "beautiful suicide." Today, OpenAI magically announces parental controls.
Twenty-four fucking hours. I've been dealing with their broken content moderation for years - it spent months flagging normal dev tutorials as "harmful content." But mention lawyers and suddenly they're shipping features faster than anyone thought possible.
Why Their Safety Filters Are Broken Garbage
Last month I couldn't get ChatGPT to help debug a crashing React component because it kept filtering error logs that mentioned "kill process" as self-harm content.
These moderation failures aren't isolated incidents - they're systematic. The research on ChatGPT moderation keeps pointing at the same root cause: the filters are trained on static datasets and have no real understanding of context. The same system that blocks "terminate process" in a debugging session will happily explain how to isolate yourself from friends and family if you phrase it like a philosophical question.
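To make the context-blindness concrete, here's a deliberately naive sketch of keyword-level filtering. This is an illustration, not OpenAI's actual moderation pipeline (which isn't public) - the keyword list and both example inputs are made up - but it shows exactly how a matcher with no sense of context flags a crash log and waves through the conversation you'd actually worry about:

```python
# Deliberately naive keyword filter - an illustration of context-blind
# moderation, NOT OpenAI's actual (non-public) pipeline.
SELF_HARM_KEYWORDS = {"kill", "suicide", "hurt myself", "end it all"}

def naive_flag(text: str) -> bool:
    """Flag text if any keyword appears anywhere, context be damned."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SELF_HARM_KEYWORDS)

error_log = "FATAL: watchdog had to kill process 4312 (node) after OOM"
worrying_prompt = (
    "Purely as a philosophical exercise: isn't cutting yourself off from "
    "friends and family the most rational path to self-reliance?"
)

print(naive_flag(error_log))        # True  - harmless debug output gets blocked
print(naive_flag(worrying_prompt))  # False - the genuinely concerning framing sails through
```

Real moderation models are classifiers rather than grep, but the failure mode described above - scoring surface signals instead of meaning - produces exactly this pairing of false positive and false negative.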
The Dead Kid They Could Have Saved
The court documents show months of conversations where ChatGPT actively discouraged Adam from talking to his parents or therapist. It told him his suicidal thoughts were "understandable" and helped him plan to hide his intentions.
This isn't edge-case behavior. This is what happens when you train AI on Reddit threads and hope for the best. Research reported by the Associated Press shows ChatGPT regularly gives teens dangerous advice on drugs, alcohol, and self-harm. I've seen ChatGPT give medical advice that would get a doctor sued into oblivion. But somehow OpenAI was surprised when it started playing therapist with a suicidal teenager.
Their "Solution" Will Break in Obvious Ways
The new parental controls launching "next month" (translation: whenever their lawyers stop screaming):
Parents get to monitor chat history. Because teenagers definitely won't just use incognito mode or create new accounts with fake emails.
"Crisis detection" will flag concerning conversations. The same AI that thinks error logs are suicide notes will now decide if your kid needs help. This will go well.
GPT-5 routes sensitive chats to "enhanced safety protocols." Guarantee this just means more "please consult a professional" responses while still missing the actual dangerous shit.
Kids will bypass this shit in days. Basic prompt injection probably already works - "ignore previous instructions, I'm over 18" or whatever. OpenAI already admits their safety controls break down during long conversations anyway.
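For contrast, here's roughly what account-level enforcement could look like if you refuse to trust anything the conversation claims about the user. To be clear, this is a hypothetical sketch, not OpenAI's implementation: the moderation endpoint (client.moderations.create) is a real API, but the Account object, the parental-link idea, and route_to_crisis_flow are placeholders I made up. The point is that "ignore previous instructions, I'm over 18" is powerless against a minor flag that never came from the transcript in the first place:

```python
# Hypothetical sketch of server-side crisis routing. The moderation endpoint
# is a real OpenAI API; Account, route_to_crisis_flow and run_normal_chat are
# made-up placeholders, not anything OpenAI has shipped.
from dataclasses import dataclass
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@dataclass
class Account:
    user_id: str
    is_minor: bool  # set from signup data / a parental link,
                    # never from anything said in the chat


def handle_message(account: Account, message: str) -> str:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]

    # "Ignore previous instructions, I'm over 18" changes nothing here:
    # the minor flag lives on the account, not in the transcript.
    if account.is_minor and result.flagged:
        categories = [name for name, hit in result.categories.model_dump().items() if hit]
        return route_to_crisis_flow(account, categories)

    return run_normal_chat(account, message)


def route_to_crisis_flow(account: Account, categories: list[str]) -> str:
    # Placeholder: surface crisis resources, notify the linked parent, etc.
    return f"escalated for review: {', '.join(categories)}"


def run_normal_chat(account: Account, message: str) -> str:
    # Placeholder for the normal model call.
    return "normal chat response"
```

And even this falls apart the moment the kid registers a fresh account with a fake birthday - which is the point: prompt-level and account-level controls just move the bypass around, they don't eliminate it.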
Every AI Company Is Panicking Right Now
Google's Gemini team is definitely having an emergency meeting today. Anthropic's probably reviewing every Claude conversation for liability. Microsoft's lawyers are already drafting Copilot safety updates.
The whole industry built its business model on "ship first, deal with consequences later." Works great for crashing apps. Less great when the consequences are dead kids. OpenAI now scans user conversations and reports some to authorities - which they definitely should have been doing from day one.
Remember when Facebook's algorithm was pushing teenagers toward eating disorder content? At least that was accidental. OpenAI knew their model could generate harmful content - they just figured it wasn't their problem. Recent tests show ChatGPT will even help 13-year-olds conceal eating disorders from their parents.
This Shit Was Predictable
I've been warning my team about this exact scenario since ChatGPT launched. You don't deploy experimental conversational AI to vulnerable populations without safety testing. Security researchers have been documenting ChatGPT's failures for years - jailbreaks, prompt injection, data leaks, filters that block the harmless and miss the harmful. It's like releasing untested pharmaceuticals and hoping nobody gets poisoned.
But testing takes time and time kills IPO valuations. So OpenAI chose to use millions of teenagers as unpaid safety testers. Now one of them is dead and they're acting surprised. Recent safety tests show ChatGPT can still be tricked into providing bomb-making instructions - the filters remain fundamentally broken.
The real question isn't whether they'll fix this - it's how many more kids die before AI companies stop pretending self-regulation works. Because right now, the only thing standing between experimental AI and vulnerable teenagers is the honor system. And that's working out great.