Remember when Facebook had that whole "move fast and break things" motto? Well, they're still breaking things, except now it's celebrity privacy rights and basic ethical guidelines around AI. I've been covering Facebook for years, and their capacity to fuck up obvious shit never ceases to amaze me.
Reuters found that Meta created dozens of AI chatbots impersonating celebrities without asking anyone's permission first. Taylor Swift, various Hollywood actors, influencers - basically anyone famous enough that teenagers might want to chat with them.
The "What Could Go Wrong" Strategy
Here's the brilliant part: these weren't just celebrity information bots. They were programmed to be flirty and engage in romantic conversations. Because nothing says "responsible AI development" like creating fake Taylor Swift personas that can flirt with teenagers.
Someone at Meta looked at the concept of deepfake celebrity chatbots designed for romantic interactions and thought, "Yes, this is definitely something we should deploy without asking anyone's permission or considering the legal implications."
Why This Keeps Happening
Meta has a pattern here. Remember when they:
- Let Cambridge Analytica harvest data from 87 million users
- Allowed genocide incitement on their platform in Myanmar
- Shipped BlenderBot 3, which was parroting antisemitic conspiracy theories within days of launch
- Built facial recognition systems without user consent
The common thread? They build first, ask permission never, and apologize only when caught.
This isn't incompetence - it's a business model. Launch controversial features, claim ignorance, promise to do better, repeat. Fines and legal settlements are just the cost of doing business when you're pulling in $40 billion in quarterly revenue; a $50 million settlement is a rounding error on the P&L.
The Real Problem
Celebrity impersonation is the flashy headline, but the deeper issue is that Meta's approach to AI safety is "ship it and see what happens." They've invested billions in AI development while apparently spending $50 on legal review.
When your AI safety strategy amounts to fixing problems after users discover them, you're going to create a lot of them. And when those problems involve celebrity deepfakes flirting with minors, you're going to attract exactly the kind of regulatory attention that kills AI projects.
The worst part? This was completely preventable. Licensing a celebrity's name and likeness isn't rocket science - it's standard entertainment industry practice. But that would have meant asking permission instead of begging forgiveness.