OpenAI started in 2015 when Elon Musk and Sam Altman got worried that Google and Facebook would monopolize artificial intelligence. They raised $1 billion to build AI "for everyone" as a nonprofit. That idealistic vision lasted exactly until they realized training neural networks costs more than most countries' GDP.
By 2019, they'd switched to a "capped-profit" model (a cap on investor returns, not revenue) because burning over $100M per training run doesn't work on donations. Elon Musk had already quit the board in 2018 (he's now building a competing AI company, xAI), and Sam Altman took over as the guy everyone either loves or thinks is going to accidentally end the world.
How They Actually Work
OpenAI's secret sauce isn't fancy algorithms - it's ungodly amounts of compute and the willingness to throw it at problems until something works. They run massive computational experiments at a scale most labs can't afford, then figure out why something worked afterward.
Their research areas break down into:
- Making AI understand and generate human language better than humans
- Getting AI to process text, images, and audio in the same system
- Trying to make sure their AI doesn't go rogue (alignment research)
- Building safety measures because everyone keeps asking "but what if it kills us all?"
The alignment research is the hard part. Teaching an AI to write code is straightforward. Teaching it to not manipulate humans while writing that code? That's where the real work happens.
What They Actually Built
ChatGPT: The app that broke every website traffic record by hitting 100 million users in 2 months. It's basically a really smart autocomplete that can write essays, debug code, and argue about philosophy. The free version uses GPT-4o, which is pretty good. The paid version gets you GPT-5, which is "holy shit" good but costs $20/month.
The API Platform: This is where developers integrate OpenAI's models into their apps. Fair warning: your OpenAI bill will make you cry. We hit $15K in our first month because nobody warned us about token costs. The pricing calculator exists, but it's more like a rough estimate than actual math.
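To avoid that kind of surprise, do the napkin math before shipping. Here's a minimal sketch of a cost estimator; the per-token prices below are made-up placeholders for illustration, not OpenAI's actual rates, so substitute current numbers from the pricing page before budgeting:

```python
# Rough cost estimator for chat-completion calls.
# NOTE: these per-1M-token prices are HYPOTHETICAL placeholders,
# not real OpenAI rates -- replace them with current pricing.
ASSUMED_PRICES_PER_1M = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "gpt-5": {"input": 10.00, "output": 40.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    p = ASSUMED_PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A month of 100k requests at ~1,500 input / 500 output tokens each:
monthly = 100_000 * estimate_cost("gpt-4o", 1_500, 500)
print(f"~${monthly:,.2f}/month")
```

Even with placeholder prices, the shape of the math is the useful part: output tokens usually cost several times more than input tokens, so capping response length matters as much as trimming prompts.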
Enterprise Solutions: Basically the same API but with compliance features and a dedicated account manager who returns your emails. Costs 10x more but worth it if you're in healthcare or finance where data leaks mean lawsuits.
The Microsoft Partnership Nobody Talks About
Microsoft invested $13 billion and gets exclusive access to OpenAI's models for their products. This means Copilot in Office, GitHub Copilot, and Azure OpenAI Service are all powered by the same models you pay OpenAI directly for, just with different pricing and terms of service.
Azure OpenAI is actually better for enterprise because you get SOC2 compliance, data residency controls, and Microsoft's enterprise support. But it's also more expensive and the model rollouts are always 3-6 months behind OpenAI's direct API.
Production Reality Check
If you're thinking about using OpenAI in production, here's what nobody tells you:
The API goes down during every major product launch. Their status page updates after you've already been paged at 3am.
Rate limits kick in right when you need the service most. The rate limiting is more art than science - sometimes you get throttled at 50% of your supposed limit. I've had production deployments fail because OpenAI randomly decided our 200 req/min tier was actually 120 req/min during peak hours.
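The standard mitigation (nothing OpenAI-specific) is client-side retries with exponential backoff plus jitter, so a burst of 429s doesn't take your service down with it. A minimal sketch, using a simulated flaky function as a stand-in for the real API call:

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() with exponential backoff plus jitter.

    For simplicity this treats any exception as transient; in real code
    you'd retry only on 429/5xx responses and respect Retry-After headers.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Delays of 1x, 2x, 4x... base, with jitter to avoid
            # every client retrying at the same instant.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)

# Simulated endpoint that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # retries twice, then prints "ok"
```

The jitter is the easy-to-forget part: without it, a fleet of clients that got throttled together will all retry together and get throttled again.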
Token counting is a dark art. The same prompt can cost different amounts depending on the model, and their tokenizer sometimes counts spaces differently than you'd expect. I once debugged a $2000 bill spike only to discover that JSON formatting was eating tokens like crazy - switching from pretty-printed to minified JSON cut costs by 40%.
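That whitespace tax is easy to demonstrate. The sketch below uses character counts as a rough proxy (exact savings depend on the model's tokenizer, so measure with a real tokenizer like tiktoken before trusting the numbers), but the pretty-printing overhead shows up clearly either way:

```python
import json

# A payload shaped like typical prompt context: nested, repetitive keys.
payload = {
    "user_id": 12345,
    "items": [{"sku": f"SKU-{i}", "qty": i % 3 + 1} for i in range(50)],
}

pretty = json.dumps(payload, indent=2)                 # human-readable
minified = json.dumps(payload, separators=(",", ":"))  # no spaces or newlines

# Character counts are only a proxy for tokens, but every indent,
# newline, and space-after-colon in the pretty version is pure overhead.
savings = 1 - len(minified) / len(pretty)
print(f"pretty: {len(pretty)} chars, minified: {len(minified)} chars "
      f"({savings:.0%} smaller)")
```

Both strings parse back to identical data, so minifying prompt context is close to free: one `separators` argument, no behavior change.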
GPT-5 is impressive but costs 4x more than GPT-4o. Use it wisely or prepare for bill shock. Most use cases work fine with GPT-4o, which is the current sweet spot for price/performance.
Current revenue is around $13 billion annually as of 2025, which sounds impressive until you realize they're also burning through $115 billion over the next 4 years on compute costs and AI research. The unit economics only work because they keep raising prices faster than competitors can catch up.