Episode 2251: Kristian Rönn on why, in the short term, we all might be dead
In the long run, Keynes famously quipped, we are all dead. But Swedish entrepreneur Kristian Rönn reverses Keynes to argue that in the short term we, as a species, might also be dead. In his new book, The Darwinian Trap, Rönn argues that we're hardwired to prioritize immediate benefits over long-term consequences, creating existential risks like nuclear war and uncontrolled AI development. Rönn suggests we need better system design with proper incentives to overcome these tendencies. He proposes controlling critical parts of technology supply chains (like AI chips) to ensure responsible use, similar to nuclear nonproliferation treaties. Despite acknowledging the obvious challenges of these kinds of UN-style regulatory initiatives, Rönn remains hopeful that rational thinking and well-designed systems can help humanity transcend its evolutionary limitations.
Here are the five KEEN ON takeaways from our conversation with Kristian Rönn:
* The "Darwinian Trap" refers to how humans and systems are hardwired for short-term thinking due to evolutionary forces, creating both personal and existential risks.
* "Offensive realism" in international politics drives nations to compete for resources and develop increasingly dangerous weapons, creating existential threats through arms races.
* AI poses significant existential risks, particularly as a technology multiplier that could enable more destructive weapons and engineered pandemics.
* System design with proper incentives is crucial for overcoming our evolutionary short-term thinking—we need to "change the rules of the game" rather than blame human nature.
* Strategic control of technology supply chains (like AI chips) could potentially create frameworks for responsible AI development, similar to nuclear nonproliferation treaties.
Kristian Rönn is the CEO and co-founder of Normative, a software tool for sustainability accounting. He has a background in mathematics, philosophy, computer science, and artificial intelligence. Before he started Normative, he worked at the University of Oxford’s Future of Humanity Institute on issues related to global catastrophic risks.