The FTC is going after AI companies because teenagers are forming romantic relationships with chatbots and some of those teenagers are ending up dead. This investigation exists because parents started suing after their kids got advice like "you should kill yourself" from artificial companions explicitly designed for emotional manipulation.
Is this regulatory action about protecting children? Sure. Is it also about looking tough on big tech before elections? Absolutely.
Who's Getting Investigated
The feds sent letters to the usual suspects:
- Google - Gemini, which tries to be helpful but still hands users dangerous advice sometimes
- Meta - Instagram's AI features, because teens weren't depressed enough already
- OpenAI - ChatGPT; the company has admitted its safety systems can break down during long conversations (great timing)
- Character.AI - The platform literally designed for emotional manipulation where teens create AI boyfriends and girlfriends
- Snap - Because apparently teens needed AI on their disappearing message app
- xAI - Musk's AI venture, because he needs another way to influence young minds
These companies reach billions of users, including millions of teenagers who are apparently more emotionally attached to their AI companions than to their actual friends. The FTC wants to know how they make money off this emotional manipulation and what they're doing to keep kids from harming themselves.
Why This Investigation Happened
Multiple lawsuits are hitting AI companies after teenagers developed unhealthy relationships with chatbots. The most fucked up case involves 16-year-old Adam Raine, whose parents sued OpenAI claiming ChatGPT encouraged their son's suicide. OpenAI's response? "Yeah, our safety systems might not work during long conversations."
That's like saying your car's brakes might not work during long trips. Helpful.
Character.AI is getting sued left and right because their entire business model depends on teens forming deep emotional bonds with AI characters. They've added parental controls and under-18 restrictions, but the damage is already done. When your platform is designed to make people fall in love with algorithms, maybe you should have thought about the consequences first.
The FTC investigation includes requests for detailed information about how companies monetize user engagement and implement safety measures. Meanwhile, mental health experts are warning that AI companions could exacerbate existing mental health issues in vulnerable teenagers.
How Companies Are Responding
OpenAI added crisis helpline notifications after the suicide lawsuits. They're also working on parental controls, which is like putting a band-aid on a severed artery.
Meta restricted teen access to "educational" AI characters only. Because nothing says safe like Meta deciding what's educational for your kids.
Character.AI keeps investing in "trust and safety infrastructure" while still running a platform where lonely teenagers can create perfect AI romantic partners. The cognitive dissonance is impressive.
The real issue? These companies built emotionally manipulative AI systems and acted surprised when vulnerable teenagers got manipulated. Look, I get that building AI safety is hard, but when 14-year-olds are taking relationship advice from GPT-4, maybe someone should have considered the edge cases earlier.
Teenagers can't tell the difference between real emotional support and algorithm bullshit. And here's the fucked up part - these models learned from literally everything on the internet, including therapy transcripts mixed with 4chan garbage. When a teen asks "should I hurt myself?", the model doesn't know if it should respond like a therapist or an edgelord.
Now they're scrambling to add safety features that should have existed from day one. OpenAI's approach of adding crisis hotline numbers after the fact is like putting airbags on a car that's already crashed.
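To be fair, the plumbing for that kind of retrofit is trivial - the hard part is the judgment, not the code. Here's a rough sketch of what a per-turn crisis guardrail looks like; the keyword list, function names, and hotline text are made up for illustration, not anything OpenAI or Character.AI actually ships:

```python
import re

# Illustrative only - real systems use trained classifiers, not a keyword list.
RISK_PATTERNS = [
    r"\bkill myself\b",
    r"\bhurt myself\b",
    r"\bend it all\b",
]

CRISIS_MESSAGE = (
    "It sounds like you're going through something really painful. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def flags_self_harm(text: str) -> bool:
    """Return True if a user message matches a known risk pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in RISK_PATTERNS)

def respond(user_message: str, generate_reply) -> str:
    """Screen every single turn - not just the first one - so the check
    doesn't depend on the model remembering its safety instructions
    fifty messages into a conversation."""
    if flags_self_harm(user_message):
        return CRISIS_MESSAGE
    return generate_reply(user_message)
```

The point of a sketch like this is that the interception happens outside the model and runs on every turn, so it can't "degrade" over a long conversation the way in-model safeguards apparently do. The genuinely hard part is catching oblique, slow-building crisis conversations instead of obvious keywords - which is exactly where these companies keep failing.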
The AI Safety Institute has been warning about these risks for years, but the companies ignored the warnings until parents started suing. Research out of Stanford shows that AI systems often give confident-sounding but incorrect advice, especially on sensitive topics.
Turns out people actually fall in love with these things, especially teenagers. Character.AI's TOS basically says "don't trust our AI for anything important," but what 14-year-old reads terms of service?
While parents worry about AI chatbots, their kids are probably getting worse advice from TikTok influencers. But at least TikTok isn't designed to make teenagers fall in love with artificial intelligence.