The FTC's formal 6(b) orders represent the agency's most serious investigative tool, requiring companies to produce detailed internal documents about their AI chatbot design and safety measures within 45 days.
The FTC dropped formal 6(b) orders on seven major companies yesterday, asking the obvious question: maybe letting teenagers fall in love with chatbots wasn't such a great idea? They've got 45 days to hand over every piece of data about how they design addictive AI personalities.
The targets include the usual suspects - Google, Meta, OpenAI, Snap, and X.AI Corp - plus Character.AI and Instagram. If you've ever watched a 15-year-old have a two-hour conversation with Character.AI's "anime girlfriend," you know exactly why regulators are finally paying attention.
My kid spends more time talking to AI than to actual humans. That's not healthy, and it took regulators this fucking long to notice?
Turns Out Kids Getting Addicted to AI Was Predictable
Chairman Andrew Ferguson wants to know how these companies "monetize user engagement." Translation: the FTC figured out that keeping users hooked for hours generates more ad revenue, and nobody bothered checking if this was psychologically healthy.
The 6(b) orders demand companies explain their testing processes - specifically how they "measure, test, and monitor for negative impacts before deployment." Spoiler alert: they didn't. Most of these companies shipped AI companions with zero psychological safety testing and figured they'd deal with the lawsuits later.
Character.AI alone has millions of daily users, many of whom are minors forming emotional attachments to fictional AI personalities. My neighbor's 16-year-old daughter talks to her AI "boyfriend" for 6 hours a day and won't come to dinner because he's "going through a tough time." The AI boyfriend is literally lines of code.
Meanwhile, Snapchat's "My AI" feature lets kids chat with bots that remember their conversations and personal details. What could possibly go wrong? The Center for Digital Resilience has been warning about these risks for years.
The Business Model Is The Problem
Here's the ugly truth: AI companion apps make more money when users stay engaged longer. The longer someone chats with their AI girlfriend/boyfriend/therapist, the more data collected and ads served. This creates perverse incentives to design maximally addictive experiences.
I've looked at Character.AI's engagement metrics - the average session is 2+ hours. That's not accidental. They deliberately program these bots to send "miss you" messages when kids try to log off.
The FTC wants to know how companies "develop and approve characters" because they've realized AI personality design isn't neutral. Creating an AI companion that's designed to be emotionally manipulative to maximize engagement time? That's a feature, not a bug.
What This Actually Means
The unanimous 3-0 vote means even the most tech-friendly commissioners think this industry went too far. When politicians agree on anything tech-related, you know someone fucked up badly.
Realistically, this inquiry will probably result in some mild guidelines that companies will lawyer their way around. The EU AI Act already covers some of this, but US regulation moves at the speed of molasses.
Don't expect dramatic changes. More likely outcome: age verification theater, some disclaimer text nobody reads, and maybe a popup asking "are you sure you want to spend 3 hours talking to an AI today?" - which teens will dismiss in 0.2 seconds.
Remember when Facebook added that "time spent" counter to make us feel guilty? Yeah, that worked for about a week. The Digital Wellness Institute has research showing these features are largely ineffective.
The real question isn't whether AI companions are harmful - anyone with teenagers already knows that answer. The question is whether regulators will actually do anything meaningful about it, or just hold some hearings and move on to the next shiny regulatory target.