AI Godfather Geoffrey Hinton Warns We're Creating "Alien Beings" That "Could Take Over"
So will AI wipe us out? According to Geoffrey Hinton, the 2024 Nobel laureate in physics, there's about a 10-20% chance of AI being humanity's final invention. That figure, as the so-called Godfather of AI acknowledges, is his way of saying he has no more idea than the rest of us about AI's species-killing potential. That said, Hinton is deeply concerned about some consequences of the AI revolution he pioneered at Google. From cyber attacks that could topple major banks to AI-designed viruses, from mass unemployment to lethal autonomous weapons, Hinton warns we're facing unprecedented risks from technology that's evolving faster than our ability to control it. So does he regret his role in the invention of generative AI? Not exactly. Hinton believes the AI revolution was inevitable—if he hadn't contributed, it would have been delayed by perhaps a week. Instead of dwelling on regret, he's focused on finding ways for humanity to coexist with superintelligent beings. His radical proposal? Creating "AI mothers" with strong maternal instincts toward humans—the only model we have of a more powerful being designed to care for a weaker one.
1. Nobody Really Knows the Risk Level: Hinton's 10-20% extinction probability is essentially an admission of complete uncertainty. As he puts it, "the number means nobody's got a clue what's going to happen" - but it's definitely more than 1% and less than 99%.
2. Short-Term vs. Long-Term Threats Are Fundamentally Different: Near-term risks involve bad actors misusing AI (cyber attacks, bioweapons, surveillance), while the existential threat comes from AI simply outgrowing its need for humans - something we've never faced before.
3. We're Creating "Alien Beings" Right Now: Unlike previous technologies, AI represents actual intelligent entities that can understand, plan, and potentially manipulate us. Hinton argues we should be as concerned as if we spotted an alien invasion fleet through a telescope.
4. The "AI Mothers" Solution: Hinton's radical proposal is that instead of trying to keep AI submissive (which won't work once it's smarter than us), we should engineer strong maternal instincts into AI systems - the only model we have of powerful beings caring for weaker ones.
5. Superintelligence Is Coming Within 5-20 Years: Most leading experts believe human-level AI is inevitable, followed quickly by superintelligence. Hinton's timeline reflects the consensus among researchers, despite the wide range.
Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe