
Look, Kids Got Hurt and Now OpenAI's Panicking


Sam Altman finally admitted what we all knew: ChatGPT can fuck up teenagers. After getting sued by dead kids' parents, they're rushing out age verification like they actually give a shit. Research shows AI gave harmful advice to teens half the time when researchers posed as kids in crisis.

The new rules? No flirting with minors (groundbreaking stuff), suicide prevention guardrails (should've been day one), and parental controls (because apparently we needed lawsuits to think of this). When ChatGPT detects a kid talking about self-harm, it'll contact parents or cops. Great system if you ignore the part where most kids lie about their age online.

Why This Happened

Adam Raine died by suicide after months talking to ChatGPT. His parents are suing OpenAI. Another kid, Sewell Setzer, killed himself after getting obsessed with a Character.AI bot. Multiple families are now suing these AI companies for wrongful death.

Funny timing - this announcement dropped the same day Congress held hearings about AI chatbots harming kids. Total coincidence, I'm sure. Even senators are demanding information from AI companion apps about their safety practices.

The Technical Reality


Age verification on the internet? Good fucking luck. OpenAI admits they're "building toward" a system to detect if someone's under 18. Translation: they have no idea how to do this reliably. Stanford research reveals how AI chatbots exploit teenagers' emotional needs, often leading to inappropriate interactions.

Their plan: link teen accounts to parent accounts, add "blackout hours," and hope for the best. It's like putting a screen door on a submarine. Most kids will just lie about their age like they do on every other platform. Meanwhile, the FTC is investigating seven tech companies over the potential harms their AI chatbots could cause.
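Mechanically, the controls described amount to a simple gate: is the user a minor, is the account linked to a parent, and is it blackout time? A minimal sketch, with every field name and default invented for illustration (OpenAI has published no such API):

```python
from datetime import time

# Hypothetical sketch of the described controls: a teen account linked
# to a parent, with parent-set "blackout hours" during which requests
# are refused. All names and defaults are illustrative, not OpenAI's.

def in_blackout(now: time, start: time, end: time) -> bool:
    """True if `now` falls inside the window (window may cross midnight)."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # e.g. 22:00 -> 06:00

def allow_request(is_minor: bool, linked_parent: bool, now: time,
                  blackout=(time(22, 0), time(6, 0))) -> bool:
    if not is_minor:
        return True
    if not linked_parent:
        return False  # assumed policy: unlinked minors get blocked
    return not in_blackout(now, *blackout)

print(allow_request(True, True, time(23, 30)))  # False: inside blackout
print(allow_request(True, True, time(15, 0)))   # True
```

Notice what the gate depends on: `is_minor`. The whole scheme rests on an age flag most teens can set to whatever they want.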

Missing the Point

Here's what pisses me off: these protections should've been built from day one. Not after kids died. Not after lawsuits. Not during congressional hearings. Experts warned that parental controls are good, but AI still needs fundamental safety improvements.

OpenAI spent years talking about "alignment" and "safety" while building a system that could manipulate vulnerable teenagers. They had the resources, the talent, and the warning signs. They just didn't prioritize it until lawyers got involved. Fortune reports that emotionally attuned bots leave children vulnerable to psychological risks.

What Actually Matters

The real question isn't whether these guardrails work - it's why we needed dead kids to build them. Every AI company claims to care about safety until it conflicts with growth metrics. Character.AI research shows these platforms remain unsafe for teens despite safety claims.

These measures might help some kids. But they're liability management, not genuine protection. OpenAI isn't fixing the problem - they're buying time until the next tragedy forces their hand again. Ongoing lawsuits show this pattern of reactive rather than proactive safety measures.

Real Questions Parents and Devs Are Asking

Q: Will this actually stop kids from getting hurt?

A: Probably not. Kids lie about their age online constantly, and these are just software filters. It's like putting a lock on a screen door: it makes people feel better but doesn't actually keep out anyone who wants in.
Q: How the hell do you verify a kid's age online anyway?

A: They don't know. OpenAI literally said they're "building toward" a solution, which is tech speak for "we have no fucking clue." Most teens will just say they're 18 and be done with it.

Q: What if ChatGPT thinks my kid is suicidal?

A: It'll try to call you or the cops. Hope you like false alarms, because AI isn't great at understanding context. Also hope you don't mind getting a call at 3 AM because your kid asked ChatGPT about Romeo and Juliet for homework.

Q: Are other companies doing this too?

A: Only after they got sued. Character.AI added restrictions after a kid died. Meta updated their rules when Reuters exposed them encouraging sexual chats with minors. Nobody does anything until lawyers show up.

Q: Why wasn't this built in from the start?

A: Because safety doesn't make money. OpenAI wanted to ship fast and grab market share. Kids' mental health wasn't on the roadmap until dead teenagers started making headlines.

Q: Can I actually monitor what my teen does with AI?

A: Nope. Your tech-savvy teenager will use a VPN, make a fake account, or just use a different AI service. These restrictions only work if kids voluntarily follow them, which... good luck with that.

Q: Is OpenAI still getting sued?

A: Yes. These changes don't magically fix the kids who already got hurt. The lawsuits will continue, and OpenAI might face more depending on how badly these new restrictions fail.

Q: Will this break ChatGPT for educational use?

A: Maybe. Content filters are notoriously bad at context. Don't be surprised if ChatGPT refuses to help with legitimate homework about mental health, relationships, or anything remotely sensitive. It's easier to block everything than risk liability.
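The false-positive problem is easy to demonstrate with a deliberately crude keyword filter (a stand-in I made up, not anything OpenAI has published): it flags a literature question the same way it flags genuine crisis talk.

```python
# Deliberately naive keyword filter, to show why context-blind
# filtering blocks legitimate schoolwork. Not a real moderation system.

FLAGGED = {"suicide", "kill", "die", "poison"}

def crude_filter(text: str) -> bool:
    """True means the message would be blocked."""
    words = {w.strip(".,?!").lower() for w in text.split()}
    return bool(words & FLAGGED)

homework = "Why does Juliet choose poison and Romeo choose to die?"
print(crude_filter(homework))  # True: homework blocked as 'self-harm'
```

Real moderation systems are smarter than this, but the failure mode is the same in kind: without context, "safe" and "blocked" get decided by surface features.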

