So OpenAI wants to ditch nonprofit control and run the business as a regular for-profit company. Translation: Sam Altman wants to get fucking rich. The current structure caps investor returns at 100x and gives Altman zero equity. The proposed conversion? Altman gets equity for the first time in an $80+ billion company.
Remember that board coup last November? The safety-focused board members fired Altman. Guess what happened? He was back within a week, and they were the ones who got replaced. Now there's nobody left to stop this money grab.
The Good Old Days (For PR)
OpenAI started in 2015 claiming they'd develop AGI for humanity. Elon Musk and other initial backers bought into the nonprofit mission. Safety over shareholder returns, they said.
By 2019, they needed cash and created this hybrid "capped profit" structure - investors could make up to 100x returns but no more. It let them raise money while keeping the mission-focused branding intact.
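For the arithmetic-minded, here's a minimal sketch of what a 100x cap means in practice. This is an illustration only, not OpenAI's actual payout waterfall: the function name and the flat per-investor cap are my assumptions, and the real cap reportedly varies by investment round.

```python
def capped_payout(investment: float, gross_return: float, cap_multiple: float = 100.0) -> float:
    """Hypothetical payout under a profit cap.

    Anything above cap_multiple times the original investment flows back
    to the nonprofit instead of the investor. Simplified sketch, not
    OpenAI's actual terms.
    """
    return min(gross_return, cap_multiple * investment)

# Example: a $10M early investor whose stake is notionally worth $5B
# keeps only $1B (100x); the remaining $4B would go to the nonprofit.
print(capped_payout(10_000_000, 5_000_000_000))  # 1000000000.0
```

The point of the cap was that the upside beyond 100x was supposed to belong to the mission, not the investors. Remove the cap and that residual goes to shareholders instead.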
Running Out of Other People's Money
ChatGPT is burning through hundreds of thousands of dollars a day in compute costs. Meanwhile, Anthropic got $4 billion from Amazon, and Google has effectively infinite money for DeepMind.
OpenAI wants to raise more cash at an $80-100 billion valuation. Problem: the nonprofit structure caps returns. VCs don't want capped profits when they're writing $10 billion checks. Solution: ditch the mission, embrace greed.
Altman's Payday
Right now, Altman holds zero equity despite running an $80+ billion company. That's about to change. The restructuring gives him equity for the first time.
Let's be clear: this is a cash grab. They used the nonprofit reputation to raise billions; now Altman wants to cash out. "Aligning financial interests with performance" is corporate-speak for "I want to get rich when the company gets valuable."
The New Playbook
OpenAI is setting a dangerous precedent. Start as a nonprofit for the good PR and regulatory cover. Build your reputation on safety and humanity's benefit. Then flip to for-profit once you're worth billions.
Other AI companies are watching. Why actually commit to safety when you can just promise it temporarily? This bait-and-switch might become the standard playbook - use nonprofit status to build trust, then cash out once you don't need the PR anymore.