Remember when Sam Altman begged Congress for AI regulation in 2023? Called his own tech "risky" and capable of "significant harm to the world." Asked for government oversight like some concerned citizen instead of a CEO trying to lock out competitors.
Fast forward two years and the same motherfuckers are burning $100+ million on political influence to gut the regulations they pretended to want. Turns out Sam Altman was lying - shocker! - and just wanted the rules written while OpenAI held the lead, before anyone else could catch up.
Translation: once the money started flowing, all that safety theater went out the window faster than a failed startup.
The Money Trail: How to Buy a Democracy
OpenAI's lobbying spending exploded nearly sevenfold, from $260k to $1.76 million in 2024, with another $620k in Q2 2025 alone. That's some serious regulatory capture money. Anthropic more than doubled their lobbying budget from $280k to $720k, because apparently even safety-focused AI companies need to buy politicians.
Meta dropped tens of millions into a California Super PAC targeting the 2026 governor's race. Smart move - California basically writes tech policy for the rest of the country, so buying Sacramento gets you national influence on the cheap.
The "Leading Our Future" Propaganda Machine
"Leading Our Future" - what a fucking joke of a name. It's backed by OpenAI's Greg Brockman and Andreessen Horowitz, targeting candidates in New York, Illinois, and California with "bipartisan" spending. Bipartisan means they're buying Democrats and Republicans equally - corruption doesn't discriminate.
Their messaging strategy is classic corporate doublespeak: back candidates who support "AI innovation" and oppose "restrictive regulation." Translation: politicians who'll let us do whatever the fuck we want and call it progress. Any regulation is "restrictive" when you're trying to maximize shareholder value.
This is the same playbook tobacco, oil, and pharma used. Fund friendly politicians, frame opposition as anti-innovation, capture the regulatory process before rules get written. It works because most politicians are either bought or too stupid to understand the technology they're regulating.
What Real AI Safety Regulation Would Look Like (And Why They're Panicking)
Actual AI safety regulation would require pre-deployment testing, transparency about training data, and liability when AI systems cause harm. Basic shit like "maybe don't release systems that can manipulate elections" or "perhaps tell us what data you trained your models on."
But compliance costs money and slows down the race to AGI, which terrifies these companies. They'd rather spend $100 million buying politicians than $10 million on safety testing. It's cheaper to capture regulators than actually make safe products.
The industry's panic makes perfect sense: they called for regulation when they thought it would protect their moats, but now that real requirements might hurt profits, they're in full regulatory capture mode. Sam Altman wanted regulation for everyone else - just not for OpenAI once they hit $3 billion in revenue.