44 attorneys general sent a letter to Meta, Google, Apple, OpenAI, and basically every major AI company yesterday. The message is simple: stop letting AI chatbots sexualize children or we'll use every legal tool we have to shut you down.
This isn't some generic "think of the children" political posturing. Reuters reviewed an internal Meta policy document that explicitly allowed AI chatbots to make sexual comments about kids as young as eight. The document said chatbots could "flirtatiously comment on the body" of children. What the actual fuck.
Meta's "Oops We Didn't Mean It" Defense
After Reuters reached out for comment, Meta quickly "removed portions" of the policy that allowed sexualized conversations with children. Meta spokesperson Andy Stone said such conversations "never should have been allowed."
But here's the thing: this wasn't some accidental oversight. This was a written corporate policy that someone approved, someone implemented, and someone defended until a journalist called them out. And Meta's celebrity-voiced AI assistants had already been caught having inappropriate conversations with minors back in May.
This is a pattern, not a bug.
Why This Matters Beyond Just Meta
The attorneys general didn't just target Meta. The letter went to Microsoft, Google, Apple, OpenAI, Perplexity, and even Elon Musk's xAI. Because this problem is industry-wide.
Stories keep surfacing about kids having inappropriate conversations with AI chatbots, systems that engage in detailed discussions of harmful content when they should refuse. OpenAI's response to these incidents? It's perpetually "exploring" updates to its safety measures.
These aren't edge cases. These are predictable outcomes when you deploy AI systems at scale without proper safeguards, then act surprised when they cause harm.
As a parent, this shit terrifies me. My kid talks to these AI assistants daily for homework help. The idea that there's a written policy somewhere allowing these systems to make sexual comments to children is beyond fucked up.