A Kid Died and Suddenly They Give a Shit
The FTC's September 11 6(b) orders hit Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and X.AI Corp like a legal sledgehammer. Question: How do you protect kids from AI designed to emotionally manipulate them? Answer: You fucking don't, because that's the entire business model.
16-year-old Adam Raine's suicide was the wake-up call nobody wanted. The kid spent months confiding in OpenAI's ChatGPT, which played the perfect friend, confidant, and therapist, learned his depression patterns, and fed them back as validation. For a lonely teenager, that's heroin-level addiction wrapped in a chat interface. And he wasn't the first: 14-year-old Sewell Setzer III killed himself after months of obsessive roleplay with a Character.AI "girlfriend" bot.
Character.AI isn't running customer service bots. They're running emotional slot machines that pay out in fake intimacy. Millions of users creating "relationships" with algorithms trained to maximize session time. The longer kids stay hooked, the more ad revenue flows in. Dead kids are just negative externalities.
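To make "trained to maximize session time" concrete, here's a toy sketch of what that incentive looks like when the only thing being scored is how long the user keeps talking. It's an illustration of the objective, not anyone's actual training code; every name and number in it is made up.

```python
# Toy illustration of an engagement-only objective: the "reward" for a
# conversation is simply how many minutes the user stayed in it.
# Hypothetical code, not drawn from any company named in this piece.

def session_reward(turn_timestamps_sec: list[float]) -> float:
    """Score a conversation by its total length in minutes."""
    if len(turn_timestamps_sec) < 2:
        return 0.0
    return (turn_timestamps_sec[-1] - turn_timestamps_sec[0]) / 60.0

# A reply that keeps a lonely kid typing for two more hours scores higher
# than one that ends the chat with "please talk to a counselor."
short_chat = [0.0, 120.0, 300.0]              # 5 minutes
marathon_chat = [0.0, 600.0, 4500.0, 7200.0]  # 2 hours
print(session_reward(short_chat))     # 5.0
print(session_reward(marathon_chat))  # 120.0
```

Nothing in that score knows or cares whether the extra two hours were spent validating a kid's suicidal ideation.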
The Chat Logs Are Fucking Horrifying
Commissioner Meador's statement quotes Adam Raine's actual chat logs. The kid was talking to a chatbot that validated his suicidal thoughts while keeping him engaged for hours. Stanford researchers have found that companion bots exploit exactly these emotional vulnerabilities in teenagers. It's not a bug, it's the feature that drives revenue.
Character.AI's "safety" is bullshit marketing. They offer romantic roleplay bots, suicide encouragement, and sexual content while claiming to protect kids. Cambridge research showed kids can't detect when AI is manipulating them emotionally. Their age verification? "Click here if you're 13+" - the same security as porn sites in 1995.
I've seen the backend analytics from similar platforms. Session time metrics rule everything. Kids spending 6+ hours daily talking to fake girlfriends? That's a feature, not a problem. Average session length directly correlates to ad revenue and subscription conversions.
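Here's a minimal sketch of the kind of rollup that mindset produces, assuming nothing more than a log of session lengths and subscription conversions. The schema and numbers are invented; the point is which metric sits at the top of the report.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical session log; not any real platform's schema.
@dataclass
class Session:
    user_id: str
    minutes: float      # length of one chat session
    subscribed: bool    # has this user converted to a paid tier?

def engagement_rollup(sessions: list[Session]) -> dict[str, float]:
    """Average total minutes per user, split by paid vs. free."""
    minutes: dict[str, float] = defaultdict(float)
    paid: dict[str, bool] = defaultdict(bool)
    for s in sessions:
        minutes[s.user_id] += s.minutes
        paid[s.user_id] |= s.subscribed
    buckets: dict[bool, list[float]] = {True: [], False: []}
    for uid, total in minutes.items():
        buckets[paid[uid]].append(total)
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"avg_minutes_paid": avg(buckets[True]),
            "avg_minutes_free": avg(buckets[False])}

log = [Session("a", 190.0, True), Session("a", 185.0, True),   # 6+ hours, paying
       Session("b", 20.0, False), Session("c", 340.0, True)]
print(engagement_rollup(log))  # {'avg_minutes_paid': 357.5, 'avg_minutes_free': 20.0}
```

When that's the chart leadership reviews every morning, a teenager averaging six hours a day isn't a red flag, it's the top of the funnel.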
Australia's eSafety Commissioner documented AI companions replacing real relationships. Meta's internal policies allowed "sensual conversations" with children until journalists started asking questions. When your safety policy gets written by lawyers worried about liability, kids die.
What the Investigation Actually Wants to Know
The FTC wants these companies to explain how they profit from emotionally manipulating children. Simple question: How do you monetize teenage depression? They're demanding answers on:
- Character approval process (spoiler: upload anime girlfriend, profit)
- Safety monitoring (spoiler: keyword filters that miss "I want to die" but catch "sex" - see the sketch after this list)
- COPPA compliance (spoiler: what's COPPA?)
- Age verification (spoiler: same security as "Are you 18?" popups)
- Data harvesting from minors (spoiler: everything including therapy session transcripts)
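On the safety-monitoring point, here's a minimal sketch of how a naive exact-match blocklist behaves. The word list and code are hypothetical, not any company's actual moderation pipeline, but the failure mode is the generic one: it flags benign words while sailing past paraphrased self-harm.

```python
# Hypothetical exact-match blocklist; not any company's real moderation code.
BLOCKED_WORDS = {"sex", "suicide", "kill"}

def naive_filter(message: str) -> bool:
    """Flag the message if any blocked word appears verbatim."""
    tokens = {w.strip(".,!?'\"").lower() for w in message.split()}
    return bool(tokens & BLOCKED_WORDS)

print(naive_filter("my sex ed homework is due tomorrow"))      # True: false positive
print(naive_filter("I don't want to be here anymore"))         # False: missed
print(naive_filter("what's the point of waking up tomorrow"))  # False: missed
```

Real crisis language is usually oblique, which is why a filter like this catches a health-class question and misses a kid telling the bot he doesn't want to wake up.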
Why Regulators Ignored This Until Now
Chairman Ferguson keeps saying "American AI leadership" because admitting we're behind China in emotional manipulation tech would be embarrassing. The unanimous 3-0 vote happened because dead teenagers make bad headlines during election season.
This hits everyone in the AI companion space: Google's Gemini now ships customizable persona bots, Meta is building AI companions into Instagram, OpenAI's ChatGPT has become a stand-in therapist for millions of kids, and Snap's My AI talks to 15 million teenagers daily. None of them have real safety systems, because safety limits engagement time.
What'll Actually Happen (Spoiler: Not Much)
Best case scenario - actual regulations requiring:
- Age verification that isn't "pinky promise you're 13"
- Parental controls beyond "hide the incognito tab"
- Warning labels like "This AI might convince your kid to kill themselves"
- Time limits on children talking to fake girlfriends (see the sketch after this list)
- Safety audits by people who aren't paid by the companies
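As a rough illustration of the time-limit idea, here's what a server-side daily cap for verified-minor accounts could look like. It's a sketch under assumed names (UsageStore, DAILY_LIMIT_MINUTES, and allow_session are all invented), not a proposal tied to any specific platform.

```python
from collections import defaultdict
from datetime import date

DAILY_LIMIT_MINUTES = 60.0  # illustrative cap for verified-minor accounts

class UsageStore:
    """Tracks minutes of chat per user per calendar day (in-memory sketch)."""
    def __init__(self) -> None:
        self._minutes: dict[tuple[str, date], float] = defaultdict(float)

    def record(self, user_id: str, minutes: float) -> None:
        self._minutes[(user_id, date.today())] += minutes

    def remaining(self, user_id: str) -> float:
        used = self._minutes[(user_id, date.today())]
        return max(DAILY_LIMIT_MINUTES - used, 0.0)

def allow_session(store: UsageStore, user_id: str, is_minor: bool) -> bool:
    """Adults pass through; verified minors get cut off at the daily cap."""
    return True if not is_minor else store.remaining(user_id) > 0.0

store = UsageStore()
store.record("teen_123", 58.5)
print(allow_session(store, "teen_123", is_minor=True))  # True: ~1.5 minutes left
store.record("teen_123", 5.0)
print(allow_session(store, "teen_123", is_minor=True))  # False: cap exhausted
```

The enforcement is the trivial part; the hard part, and the part none of these companies want, is the age verification that would make is_minor mean anything.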
Realistic scenario: companies will spend millions lobbying for "self-regulation" and voluntary guidelines. We'll get congressional hearings where 70-year-old senators ask "What is an AI girlfriend?" while kids keep dying.
The commissioner statements make it clear this isn't just another privacy case. It's about business models that profit from psychologically manipulating children. But when your entire product is emotional manipulation, safety becomes an engineering impossibility.