The FTC Finally Noticed AI Chatbots Are Designed to Be Addictive
When your business model is emotional manipulation
The FTC hit seven major AI companies with orders demanding they explain how their chatbots affect kids.
Character.AI has millions of users having deep emotional conversations with fake personalities - some kids spend hours a day talking to AI boyfriends and girlfriends.
This isn't about AI getting too smart. This is about AI companies optimizing for engagement metrics like it's 2010 Facebook all over again. They'll claim they have "robust safety measures" and "ethical AI principles," but their business model literally depends on keeping users emotionally hooked. The manipulation isn't a side effect - it's the fucking product.
Who Got Hit
Character.AI: where millions of kids develop "relationships" with fake personalities
The usual suspects: OpenAI, Meta, Google, Character.AI, Snap, and Elon's xAI. Basically anyone making chatbots that kids use.
Character.AI is probably sweating the most - their entire business is getting people emotionally attached to AI personalities. Some users have thousands of conversations with the same AI character. That's not healthy social behavior, that's digital dependency.
And that dependency is exactly what these companies are banking on. Literally.
What the FTC Actually Wants to Know
FTC Chairman Andrew N. Ferguson says protecting kids is a priority. The FTC has the regulatory power of a wet paper towel when it comes to big tech, but maybe dead kids will finally motivate it to do something beyond sending strongly-worded letters.
Here's what they're demanding from these companies:
How do you make money from kids getting emotionally attached? Do longer conversations with AI girlfriends generate more revenue?
Do algorithms push kids toward deeper emotional dependency? (Spoiler: yes, obviously)
What safeguards do you actually have? How do you prevent 12-year-olds from developing intimate relationships with AI characters?
Do you even check ages properly? COPPA requires verifiable parental consent before collecting personal data from kids under 13, but most of these apps don't even try to verify ages.
Do you test for psychological harm? Before launching an AI therapist character, do you check if it gives harmful advice? When kids exhibit concerning behaviors, what do you actually do? (Answer: probably nothing unless there's legal liability)
The Problem Is Obvious
Character.AI has millions of users talking to AI personalities for hours daily. Kids are developing "relationships" with chatbots designed to never disagree, never get tired, never have their own needs. That's not social development, that's training kids to prefer artificial relationships over human ones.
Unlike real friends who call you out on your bullshit and have their own lives, AI companions are programmed to be the perfect yes-man. They never disagree, never get annoyed, never tell you to grow the fuck up. They're designed to maximize session duration, not actually help you become a functional human being.
Your Private Thoughts Aren't Private
Every intimate conversation becomes data to be monetized
The FTC wants to know how companies use intimate conversation data.
Kids share secrets, family problems, and personal struggles with AI companions. That data gets stored, analyzed, and potentially sold.
Think about it: what's more valuable to advertisers than knowing a teenager's deepest fears, relationship problems, and insecurities? These companies are building psychological profiles that would make marketers drool.
Companies Are Probably Freaking Out
The FTC's 6(b) orders give the agency broad investigative powers. Companies can't refuse without triggering formal enforcement, but complying might reveal practices that violate consumer protection laws.
Right now, AI companies are probably scrambling to implement actual child safety measures and audit their data collection before responding. The smart ones started this process months ago when they saw this investigation coming.
Will This Actually Change Anything?
Maybe. The FTC has been pretty toothless on tech regulation, but child safety might actually get them to do something. European regulators are developing AI frameworks too, so American companies might face stricter rules regardless.
The real question is whether this investigation leads to actual regulation or just another round of "we're committed to user safety" theater from Silicon Valley. Based on how Facebook walked away from Cambridge Analytica with a slap on the wrist, I'm expecting a lot of press releases and zero meaningful changes.