Here's what actually happened: Anthropic figured out that slapping "AI safety" on everything makes VCs feel better about dumping billions into another ChatGPT clone. And it worked brilliantly.
The company went from a $1B to a $5B revenue run-rate this year, but let's be honest - that's mostly because Amazon threw $8 billion at them and made Claude the flagship model on AWS Bedrock. When the biggest cloud provider in the world forces your product down everyone's throat, revenue tends to grow.
The real genius is the positioning. While OpenAI puts out a new PR fire every week with their "move fast and break democracy" approach, Dario Amodei positioned Anthropic as the "responsible" alternative. Constitutional AI sounds way better than "we made the chatbot less likely to tell you how to make bombs."
But here's what VCs actually bought: insurance against regulation.
Every conversation about AI regulation mentions Anthropic favorably because they've spent millions on safety theater. When Congress inevitably starts regulating AI companies, guess who's going to have an easier time? The company with the Frontier Red Team and safety evaluations, or the one where the CEO tweets "AGI internally" at 2am?
The dirty secret: Claude and GPT-4 are basically the same fucking thing. Both can write code, both hallucinate, both struggle with math. The difference is that Anthropic hired better PR people and charges 20% more because "safety costs extra."
Enterprise customers love this narrative because it gives their procurement departments cover. "We chose Claude because of their safety focus" sounds better in the board meeting than "we chose it because the AWS rep gave us a discount."
The $183B valuation is still insane - that would rank Anthropic among the hundred or so most valuable public companies on earth. But in a world where everyone's betting on AI being the next internet, being the "safe" option is worth a premium. Even if that safety is mostly marketing.