Meta decided to clone celebrities into chatbots without asking first. I've worked with plenty of product teams that make questionable decisions, but "let's steal celebrity identities for our AI" takes some serious balls.
According to multiple reports, they built Taylor Swift bots that flirt with users, fake Scarlett Johansson personalities, and Anne Hathaway clones designed to keep people doom-scrolling. If you've ever wondered what happens when product managers get too much cocaine and not enough legal review, this is it.
Why Meta Thought This Was a Good Idea
The logic probably went: celebrities have huge followings, people are obsessed with parasocial relationships, so why not create AI versions that never sleep and always say whatever keeps users scrolling?
Except they forgot one detail - you can't just steal someone's identity to build your engagement machine. I've seen product teams ship features without legal review before, usually small stuff like changing button colors. This is shipping a feature that literally impersonates real people for profit.
The "flirty" aspect makes it worse. These bots were designed to be romantically engaging, basically turning celebrities into unwilling digital escorts. If you found out some tech company created a fake version of you hitting on strangers online 24/7, you'd probably call your lawyer too.
The Legal Mess Meta Just Created
Celebrities have expensive lawyers who spend their time protecting image rights. Meta just handed them the easiest payday ever.
Right of publicity laws exist specifically to prevent this - you can't use someone's likeness commercially without permission. Meta created interactive digital versions of real people to drive engagement, and engagement is what sells ads. That's textbook commercial exploitation.
Taylor Swift's lawyers probably bill more per hour than senior engineers make. Now multiply that by every celebrity they cloned. This won't be a polite cease-and-desist - this will be expensive.
What This Says About AI Development
This mess shows everything wrong with how tech companies build AI features. They ship first, ask permission never, and deal with consequences only when caught.
Meta's response to getting caught? They're "adding new AI safeguards" - the classic non-apology that admits nothing while promising to do better next time.
Here's what bugs me: how did this get approved? Someone at Meta looked at celebrity likenesses and decided to turn them into flirty chatbots. Multiple layers of management signed off. That's not a technical mistake - that's a complete failure of judgment at the executive level.
I've worked at big tech companies. There are legal reviews, ethics boards, product reviews. For this to ship, a lot of people had to actively ignore obvious problems or just not care. That's scarier than any technical bug.