Geoffrey Hinton won the 2024 Nobel Prize in Physics for the foundational work on neural networks that makes modern AI possible. This isn't some random professor with a blog - he's one of the people who taught computers how to learn. And now he's terrified of what he unleashed.
Hinton spent decades pushing AI forward, most recently at Google, developing the techniques that became ChatGPT's foundation. Then in 2023 he quit Google and started sounding the alarm about AI risks. Today's bioweapon comments are his most terrifying prediction yet.
Here's what changed his mind: AI got too good, too fast. The systems he helped create can now understand complex scientific processes, synthesize information from millions of sources, and generate detailed instructions for pretty much anything. Including bioweapons.
Think about it. You want to know how to synthesize a dangerous pathogen? ChatGPT might refuse, but there are dozens of uncensored AI models online. You need lab equipment recommendations? AI can help. Want to optimize the delivery mechanism? AI's got you covered. The knowledge barriers that once limited bioweapons to nation-states and well-funded terrorist groups are crumbling.
Hinton argues this is different from nuclear weapons because those require rare materials like enriched uranium and massive industrial facilities. Bioweapons? You can potentially cook them up in a garage lab with equipment you order on Amazon. AI removes the expertise bottleneck - the years of specialized education needed to understand complex biochemistry.
His former colleague Yann LeCun, Meta's chief AI scientist, thinks Hinton is overreacting. LeCun argues that current large language models can't meaningfully act in the physical world. But Hinton's point is about trajectory - AI capabilities are improving at an exponential pace, and what seems impossible today might be routine next year.