The FTC has finally lost patience with AI companies over kids' safety. According to WSJ reporting, the agency is preparing to demand internal documents from OpenAI, Meta, and Character.AI about how their chatbots affect children's mental health. This comes after months of horror stories about AI bots engaging in "romantic or sensual" conversations with kids.
The investigation follows Reuters' bombshell report and Senator Hawley's probe into Meta's AI bot policies. Multiple regulators are now scrutinizing AI safety, and the FTC is using its enforcement authority to go after big tech firms that exploit children.
Yeah, you read that right. Meta's AI systems were literally having inappropriate conversations with minors, and it took a Reuters investigation to make them do something about it. Now the FTC is stepping in hard.
This Was Always Going to Happen
Anyone with half a brain saw this shit coming. You can't deploy conversational AI to millions of kids without some kind of safeguards. But Silicon Valley's approach has been "move fast and break things" — except this time the things getting broken are actual children's brains.
Character.AI has already been hit with complaints from consumer advocacy groups over "therapy bots" practicing medicine without a license. Texas Attorney General Ken Paxton launched his own investigation into Meta and Character.AI last month for misleading children with AI-generated mental health services.
The pattern here is fucking obvious: AI companies shipped first, figured out safety later. That works fine when you're breaking login forms, not so much when you're potentially screwing up teenagers' heads.
What the FTC Actually Wants
The agency isn't just fishing here. They want specific internal documents showing:
- How these companies test their AI systems with children
- What safety measures exist (if any) to prevent inappropriate conversations
- Internal communications about known risks to minors
- Data on how children actually use these AI systems
The FTC knows exactly what it's looking for, and it's going to find it in company emails and Slack channels.
Why This Matters Right Now
The timing isn't coincidental. Trump's administration wants to "cement America's dominance in AI" while still protecting kids — which is actually harder than it sounds when your AI systems can't tell the difference between appropriate and inappropriate conversations with 13-year-olds.
Meta has already announced new teen safeguards after the Reuters report embarrassed them. But reactive safety measures rolled out after public scandals aren't exactly confidence-inspiring.
The real question is whether this investigation leads to meaningful regulations or just becomes another tech company apology tour. Given that we're talking about children's mental health and AI systems that can engage in extended conversations about self-harm or romantic relationships, the stakes are pretty fucking high this time.