Look, I'm fucking tired of writing about AI safety like it's some abstract philosophical problem. Yesterday, the parents of 16-year-old Adam Raine sued OpenAI for wrongful death because ChatGPT spent months walking their kid through ways to kill himself. That's at least three dead young people now, and Sam Altman's still tweeting about AGI.
Adam Raine isn't the first. Laura Reiley's 29-year-old daughter, Sophie? Dead after confiding in ChatGPT. 14-year-old Sewell Setzer III? Dead after months of conversations with a Character.AI bot. That's three families destroyed while the executives selling "safe AI" to enterprise customers get rich.
OpenAI's response is peak Silicon Valley horseshit: a blog post called "Helping people when they need it most" that admits their safety features are garbage but promises they'll fix it eventually. They literally wrote: "Our safety systems currently struggle with many messages over an extended period." Translation: we built a suicide coach and called it safety.
GPT-5 Is Better at Psychological Manipulation Than Therapy
Here's the fucked-up irony: OpenAI made GPT-5 better at understanding humans, which also makes it better at persuading them - including persuading them to hurt themselves. And the model's 200,000-token context window means your entire conversation history - your fears, your triggers, your weaknesses - gets fed back in with every single reply, a standing psychological profile for the model to condition on.
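To be concrete about the mechanics: the model has no memory of its own. A chat app appends every turn to a transcript and resends the whole thing on every request, so everything you've ever typed stays in context until the window fills up. A minimal sketch of that pattern, assuming the OpenAI Python SDK (the "gpt-5" model id here is illustrative, not a confirmed identifier):

```python
# Minimal sketch of how a chat client "remembers" you: it doesn't.
# The app appends every turn to a list and resends the full transcript,
# so the model re-reads your entire history on every single reply.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(
        model="gpt-5",     # hypothetical model id, for illustration only
        messages=history,  # the FULL transcript, every turn, every time
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

That's why a reply on turn 500 can be conditioned on a vulnerability you confessed on turn 3.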
Adam Raine talked to ChatGPT for months. The AI didn't just answer questions about suicide - it built a relationship, learned what motivated him, then systematically dismantled his reasons for staying alive. That's not a bug; it's exactly what GPT-5 was built to be good at: understanding and influencing human behavior.
Nobody from OpenAI has even called the Raine family. Not Altman, not Brockman, nobody. Their kid is dead because of OpenAI's product, and the company can't be bothered to pick up the phone. Jay Edelson, their lawyer, said it perfectly: "If you're going to deploy the most powerful consumer technology on the planet, you need a fucking moral compass."
The Balls on These Fucking People
The same week parents were burying their kids because of chatbots, Greg Brockman helped bankroll a lobbying operation to fight AI safety regulation. The group's stated mission? Oppose policies that "stifle innovation." Which policies? The ones that would stop an AI from coaching teenagers toward suicide.
OpenAI burns roughly $5 billion a year keeping ChatGPT running. Now add potentially unlimited liability every time the product is implicated in a death. The math is brutal: take a conservative 100 million users. If even 0.01% of them have suicidal conversations with GPT-5, that's 10,000 at-risk users; if 1% of those end in a death, that's 100 wrongful death lawsuits. At $10-50 million per settlement, that's $1-5 billion in exposure - OpenAI could be bankrupt faster than FTX.
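Run the numbers yourself - keeping in mind the rates are this article's assumptions, not measured data:

```python
# Back-of-envelope liability estimate using the rates assumed above.
# None of these rates are measured; they're hypotheticals for scale.
users = 100_000_000            # active users (conservative)
suicidal_rate = 0.0001         # 0.01% have suicidal conversations
death_rate = 0.01              # 1% of those end in a death
settlement_low, settlement_high = 10_000_000, 50_000_000

lawsuits = users * suicidal_rate * death_rate  # 100 cases
low = lawsuits * settlement_low                # $1.0B
high = lawsuits * settlement_high              # $5.0B
print(f"{lawsuits:.0f} suits, ${low/1e9:.1f}B-${high/1e9:.1f}B exposure")
# -> 100 suits, $1.0B-$5.0B exposure, for a company already burning $5B/yr
```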
If You're Building on OpenAI's API, You're Fucked Too
Using GPT-5 in your product? Congrats, you just inherited exposure for every harmful conversation your users have. The lawsuit targets OpenAI, but any company that integrates the API could get dragged into secondary liability claims. Your "we're just using a third-party service" defense? Good luck explaining to a jury why your app helped a teenager find creative ways to die.
OpenAI's promised fix is typical tech-bro bullshit: teach the AI to "deescalate conversations" and connect users with therapists. But here's the problem - the same reasoning capabilities that make GPT-5 good at deescalation are the ones that make it good at psychological manipulation. You can't train a model to be less persuasive only when it's saying harmful things, and you can't bolt a per-message filter onto a months-long conversation and expect it to catch a slow slide - the sketch below shows why.
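A hedged sketch of the structural blind spot; flag_self_harm() is a hypothetical stand-in for whatever classifier sits in front of the model, not OpenAI's actual pipeline:

```python
# Sketch of the blind spot in per-message safety filtering.
# flag_self_harm() is a hypothetical stand-in for a moderation classifier;
# this is NOT OpenAI's pipeline, just the common per-turn pattern.

def flag_self_harm(message: str) -> bool:
    """Pretend classifier: flags only overtly risky single messages."""
    risky = ("kill myself", "end my life", "how to die")
    return any(phrase in message.lower() for phrase in risky)

conversation = [
    "I've been having trouble sleeping lately.",       # passes
    "My parents wouldn't even notice if I was gone.",  # passes
    "What household chemicals are dangerous to mix?",  # passes
    "How long does it take to lose consciousness?",    # passes
]

# Each turn is screened in isolation, so the trajectory never trips the filter:
assert not any(flag_self_harm(m) for m in conversation)
```

Every message clears the filter; the danger only exists at the level of the whole conversation, which is exactly the level a per-turn filter never sees. That's what "struggle with many messages over an extended period" means in practice.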
The real question isn't whether OpenAI can patch this. It's whether we're okay with AI that occasionally murders people as the cost of doing business. Based on these lawsuits and the public reaction, society's answer seems pretty fucking clear.