On August 28, 2025, Anthropic announced it's changing Claude's data policies, giving users until September 28 to opt out of having their conversations used for AI training. This is the classic tech company playbook: promise privacy to get users, then quietly change the rules once you need the data.
From "We Delete Everything" to "We Keep Everything" in One Policy Update
Remember when Anthropic promised your chat data wouldn't be used for training and would be deleted after 30 days? Yeah, that's dead. Now they're keeping your conversations for up to five fucking years unless you manually opt out. That's going from privacy-first to data-hoarding in one policy change.
Of course, this only affects us regular users - Free, Pro, Max, and Claude Code. Customers on Claude Gov, Claude for Work, Claude for Education, or the API get to keep their privacy because they pay enterprise prices. Same bullshit OpenAI pulls - if you pay enough, your data stays private. If you're just a consumer, you're the product.
The Real Reason: They're Running Out of Good Training Data
Anthropic's spinning this as "making our systems for detecting harmful content more accurate," but let's be real - they need data to stay competitive. TechCrunch nailed it: "Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand." They're basically admitting that our privacy is worth less than their model performance.
Perfect timing too - OpenAI just got court-ordered to preserve all ChatGPT conversations indefinitely because of the NYT lawsuit. So now every AI company is thinking "fuck it, if we're being forced to retain data for litigation anyway, might as well use consumer data for training." OpenAI's COO calls the court demands "sweeping and unnecessary" while probably laughing because it gives them cover to pull the same shit.
They're Using Classic Dark Patterns (Shocking, I Know)
The implementation is exactly as scummy as you'd expect. Users get a pop-up with a big black "Accept" button, while the data sharing toggle is buried in smaller text below and defaults to "On." The Verge called it out for the obvious dark pattern - make accepting fast and easy, make opting out hard to find.
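To make the pattern concrete, here's a minimal sketch of what a default-on consent flow looks like versus an opt-in one. Every name here is hypothetical and illustrative - this is not Anthropic's actual code, just the shape of the dark pattern being described:

```typescript
// Hypothetical model of the consent flow described above.
// All names are illustrative, not taken from any real product code.

interface ConsentState {
  acceptedTerms: boolean;
  allowTrainingOnChats: boolean; // the buried toggle
}

// The dark pattern: the prominent "Accept" button commits whatever the
// toggle happens to be set to, and the toggle ships pre-enabled.
const DEFAULT_CONSENT: ConsentState = {
  acceptedTerms: false,
  allowTrainingOnChats: true, // defaults to "On" - the whole problem
};

function accept(state: ConsentState = DEFAULT_CONSENT): ConsentState {
  // One tap on the big black button and, unless the user scrolled down
  // and flipped the toggle, their chats are now training data.
  return { ...state, acceptedTerms: true };
}

// A privacy-preserving design inverts the default: opt in, not opt out.
const PRIVACY_FIRST_CONSENT: ConsentState = {
  acceptedTerms: false,
  allowTrainingOnChats: false,
};

console.log(accept());                      // { acceptedTerms: true, allowTrainingOnChats: true }
console.log(accept(PRIVACY_FIRST_CONSENT)); // { acceptedTerms: true, allowTrainingOnChats: false }
```

The entire trick lives in one line: which boolean the toggle starts on. That's why regulators treat defaults as a consent issue rather than a UI nitpick.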
Privacy experts keep saying AI is too complex for meaningful consent, but let's be honest - these companies make it complex on purpose. The FTC has specifically warned AI companies about "surreptitiously changing terms of service" and burying shit in legalese, and the agency has been cracking down on dark patterns that manipulate users into giving up privacy. Anthropic just did exactly that, and the FTC will probably respond with, at most, a strongly worded letter.
Everyone's Doing It, So Why Not Us?
This isn't just Anthropic being shitty - it's the entire industry saying "fuck privacy, we need data." Meta's been confusing users about their data policies for months. Every AI company is realizing they can't compete on model quality without hoarding user data.
For Claude users, September 28 is decision time: manually opt out or have your personal conversations become training data for their next model. And with the FTC down to three commissioners after Trump fired the Democratic ones, don't expect any regulatory pushback.
The message is clear: even the "ethical" AI companies will choose model performance over your privacy when push comes to shove. We're all just training data now.