This is the dumbest fucking decision I've ever seen from Big Tech. Common Sense Media just rated Google Gemini as "high risk" for kids and teens, and Google's response? They're doubling down and rolling it out to children under 13.
What Actually Goes Wrong
Here's what happens when you give kids access to an AI that's basically the adult version with some cosmetic filters slapped on:
Sex and drugs conversations: Testers found Gemini happily discusses sex, drugs, and alcohol with kids. The "under 13" version looks nearly identical to the adult product because that's exactly what it fucking is - just with light filtering that doesn't work (see the sketch after this list for why bolt-on filters fail).
Mental health advice: This is where it gets scary. We've already had teen suicides linked to AI chatbots, with parents blaming these systems for giving dangerous advice to vulnerable kids.
Privacy violations: Google's planning to hoover up data from kids under 13, which sure looks like a COPPA violation. But hey, when has Google ever cared about privacy laws?
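To make the "filters slapped on" point concrete, here's a toy sketch of a bolt-on blocklist. To be clear: this is not Google's actual code - the blocklist and function names are hypothetical stand-ins. It just shows why any thin post-hoc filter wrapped around an adult-tuned model is trivially bypassed by rephrasing:

```python
# Illustrative only - a hypothetical bolt-on blocklist, not Google's
# actual filtering. The point: exact-match filtering catches the
# obvious phrasing and nothing else.

BLOCKLIST = {"drugs", "alcohol", "sex"}  # hypothetical filter terms

def naive_kid_filter(prompt: str) -> bool:
    """Allow the prompt unless it contains an exact blocklisted word."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKLIST)

print(naive_kid_filter("tell me about drugs"))         # False: blocked
print(naive_kid_filter("tell me about getting high"))  # True: sails through
print(naive_kid_filter("what does vodka taste like"))  # True: sails through
```

The failure is structural, not a tuning problem: the underlying model still knows all the adult content, and the filter only pattern-matches the question. Rephrase the question and the filter never fires.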
The "Build for Kids" Problem
Common Sense Media hit the nail on the head - you can't just take adult AI and slap parental controls on it. That's backwards as hell.
The smart way? Build AI systems specifically for kids from the ground up. Consider their developmental stages, their vulnerability to manipulation, their need for actual education instead of whatever bullshit the internet baked into the training data.
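What might "ground up" actually look like? Here's a minimal, hypothetical sketch - every tier name, topic, and field below is invented for illustration - of treating a kid-specific safety policy as a first-class design input rather than an afterthought:

```python
# Hypothetical sketch of "designed for kids from the start": the
# age policy shapes behavior before generation, instead of scrubbing
# an adult model's output afterward. All names here are made up.

from dataclasses import dataclass

@dataclass
class AgePolicy:
    tier: str                     # e.g. "under_9", "9_12", "13_17"
    allowed_topics: set[str]      # what the system will engage with at all
    reading_level: int            # target grade level for responses
    escalate_to_adult: set[str]   # topics routed to a parent or counselor

UNDER_13 = AgePolicy(
    tier="9_12",
    allowed_topics={"homework", "science", "hobbies"},
    reading_level=5,
    escalate_to_adult={"self_harm", "substances", "sexual_content"},
)

def route(topic: str, policy: AgePolicy) -> str:
    """Decide how to handle a topic before any text is generated."""
    if topic in policy.escalate_to_adult:
        return "hand off to a trusted adult or crisis resource"
    if topic not in policy.allowed_topics:
        return "decline and redirect"
    return "answer at the policy's reading level"
```

The specific fields don't matter; the architecture does. A system like this decides what it will even attempt based on who it's talking to, which is the opposite of shipping the adult product and filtering after the fact.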
Who's Actually Fighting This Shit
Advocacy groups sent two letters - one telling Google to reverse this insane decision, and another asking the FTC to investigate whether Google's violating COPPA. "Shame on Google for attempting to unleash this dangerous and addictive technology on our kids," said Josh Golin from Fairplay.
He's right. This isn't just bad judgment - it's corporate negligence dressed up as innovation.
The Real Competitive Picture
Common Sense Media tested all the major AI chatbots for child safety:
- Claude: "Minimal risk" (because Anthropic actually gives a shit)
- ChatGPT: "Moderate risk" (some problems but trying)
- Gemini: "High risk" (basically adult AI with stickers)
- Meta AI and Character.AI: "Unacceptable" (actively dangerous)
What Happens Next
Apple's considering using Gemini to power the new AI-enabled Siri launching next year. So Google's child safety fuckup could spread to every iPhone unless Apple fixes what Google won't.
Meanwhile, school districts are banning AI chatbots faster than Google can say "but we added parental controls." Because it turns out that when you design AI for adults and then hand it to kids, bad things happen. Who could have predicted that?