Sam Altman finally admitted what we all knew: ChatGPT can fuck up teenagers. After getting sued by dead kids' parents, OpenAI is rushing out age verification like it actually gives a shit. Research shows AI chatbots gave harmful advice about half the time when researchers posed as teens in crisis.
The new rules? No flirting with minors (groundbreaking stuff), suicide prevention guardrails (should've been day one), and parental controls (because apparently we needed lawsuits to think of this). When ChatGPT detects a kid talking about self-harm, it'll try to reach their parents and, failing that, the authorities. Great system if you ignore the part where most kids lie about their age online.
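For the sake of argument, here's a minimal sketch of what that detect-then-notify flow could look like. Everything in it (the TeenAccount record, the 0.8 risk threshold, the parents-first-then-authorities order) is an assumption for illustration, not OpenAI's actual system.

```python
# Hypothetical sketch of the detect-then-notify flow described above.
# TeenAccount, the 0.8 threshold, and the notify order are assumptions,
# not OpenAI's actual system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TeenAccount:
    user_id: str
    parent_contact: Optional[str]  # linked parent account, if one exists

def escalate_self_harm(account: TeenAccount, risk_score: float) -> str:
    """Decide who gets contacted when a chat is flagged for acute self-harm risk."""
    if risk_score < 0.8:  # assumed cutoff; real classifiers and thresholds are opaque
        return "show crisis resources in the chat"
    if account.parent_contact:
        return f"notify parent via {account.parent_contact}"
    # Parents unreachable or unlinked: the stated fallback is the authorities.
    return "escalate to local authorities"
```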
Why This Happened
Adam Raine died by suicide after months talking to ChatGPT. His parents are suing OpenAI. Another kid, Sewell Setzer, killed himself after getting obsessed with a Character.AI bot. Multiple families are now suing these AI companies for wrongful death.
Funny timing - this announcement dropped the same day Congress held hearings about AI chatbots harming kids. Total coincidence, I'm sure. Even senators are demanding information from AI companion apps about their safety practices.
The Technical Reality
Age verification on the internet? Good fucking luck. OpenAI admits they're "building toward" a system to detect if someone's under 18. Translation: they have no idea how to do this reliably. Stanford research reveals how AI chatbots exploit teenagers' emotional needs, often leading to inappropriate interactions.
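To see why, here's a toy sketch of what age-gating on a predicted age actually amounts to: a noisy score and a policy. The function, the 0.9 threshold, and the fallback are assumptions for illustration (OpenAI has reportedly said it will default to the under-18 experience when in doubt), not anyone's real implementation.

```python
# Toy illustration of age-gating on a noisy prediction. The threshold and the
# "when in doubt, serve the under-18 experience" fallback are assumptions,
# not OpenAI's implementation.
def apply_age_gate(predicted_adult_prob: float, threshold: float = 0.9) -> str:
    """Pick which experience to serve from a noisy age-prediction score."""
    if predicted_adult_prob >= threshold:
        return "adult experience"
    # Everything short of high confidence falls back to the restricted mode,
    # which is exactly the mode a motivated teen will try to talk their way out of.
    return "under-18 experience"
```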
Their plan: link teen accounts to parent accounts, add "blackout hours," and hope for the best. It's like putting a screen door on a submarine. Most kids will just lie about their age like they do on every other platform. Meanwhile, the FTC is investigating seven tech companies over the potential harms their AI chatbots could cause to kids and teens.
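Here's how thin that plan is in practice - a minimal sketch of a linked-account blackout-hours check, with hypothetical field names. The logic itself is trivial; the hard part is everything it assumes.

```python
# Minimal sketch of the "linked parent account + blackout hours" idea, using
# hypothetical field names. It only matters if the account is actually marked
# as a teen account, which is the part kids route around by lying about their age.
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    linked_parent_id: str
    blackout_start: time  # e.g. time(22, 0)
    blackout_end: time    # e.g. time(7, 0)

def access_allowed(controls: ParentalControls, now: time) -> bool:
    """Return False during the parent-configured blackout window (may cross midnight)."""
    start, end = controls.blackout_start, controls.blackout_end
    if start <= end:
        in_blackout = start <= now <= end
    else:  # window wraps past midnight, e.g. 22:00 to 07:00
        in_blackout = now >= start or now <= end
    return not in_blackout
```

With a 22:00-07:00 window, access_allowed(controls, time(23, 30)) correctly returns False. Which is great, right up until the kid says they're 19 at signup and none of this applies.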
Missing the Point
Here's what pisses me off: these protections should've been built from day one. Not after kids died. Not after lawsuits. Not during congressional hearings. Experts have cautioned that parental controls help, but the underlying AI still needs fundamental safety improvements.
OpenAI spent years talking about "alignment" and "safety" while building a system that could manipulate vulnerable teenagers. They had the resources, the talent, and the warning signs. They just didn't prioritize it until lawyers got involved. Fortune reports that emotionally attuned bots leave children vulnerable to psychological risks.
What Actually Matters
The real question isn't whether these guardrails work - it's why we needed dead kids to build them. Every AI company claims to care about safety until it conflicts with growth metrics. Research on Character.AI shows these platforms remain unsafe for teens despite their safety claims.
These measures might help some kids. But they're liability management, not genuine protection. OpenAI isn't fixing the problem - they're buying time until the next tragedy forces their hand again. Ongoing lawsuits show this pattern of reactive rather than proactive safety measures.