So OpenAI disbanded their superalignment team in May 2024 - barely ten months after announcing they'd solve AI safety. Both Ilya Sutskever and Jan Leike quit. Leike publicly called them out for not giving safety work the resources it needed.
This timing is suspicious as hell. They dissolved the team right as they were shipping GPT-4o. WIRED confirmed the safety research got "redistributed to other groups." I've worked in tech long enough to know what that means - safety becomes nobody's job.
OpenAI announced this team in July 2023 with promises of dedicating 20% of compute resources to alignment research over four years. The goal was solving the alignment problem before building superintelligent AI systems.
They Promised 20% Compute, Delivered Zero
That 20% compute allocation? Never happened. Leike's resignation letter straight up said safety work wasn't getting the resources it needed. When your safety lead quits publicly over resources, that's not restructuring - that's abandonment.
OpenAI was burning through millions daily running ChatGPT infrastructure and decided safety research was optional. Product features ship and make money. Safety research makes papers. Guess what wins when you're hemorrhaging cash?
Every Company Does This Shit
OpenAI's not special. Google fired ethics researcher Timnit Gebru in 2020 for calling out bias in their models. Meta killed their Responsible AI team in 2022. There's a pattern here - safety researchers get hired for PR, then fired when they actually try to do safety research.
Every company starts with noble missions and ends chasing money. Tale as old as time. Safety work doesn't generate immediate revenue, so it gets axed when the bills come due.
ChatGPT Is Already Broken
The timing's fucked because ChatGPT already has major alignment problems. Users bypass its safety filters daily through prompt injection. I've seen it refuse to help with coding homework while happily explaining how to manipulate people. These are the basic alignment issues that needed fixing yesterday, not in some hypothetical AGI future.
Dissolving the safety team while the current system is broken? That's not strategic - that's negligent.
The Brain Drain Continues
This isn't just two people leaving - multiple safety researchers bailed in 2024 over the same concerns. Dario Amodei saw this coming and left OpenAI back in 2021 to start Anthropic.
When your best safety researchers keep quitting to start competitors focused on safety, maybe take the hint. OpenAI transformed from a nonprofit research lab into a commercial AI factory. Their safety researchers got the memo - they're not wanted anymore.