"Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks" by James_Miller

"Precedents for the Unprecedented: Historical Analogies for Thirteen Artificial Superintelligence Risks" by James_Miller

Author: LessWrong · January 19, 2026 · Duration: 2:03:49
Since artificial superintelligence has never existed, claims that it poses a serious risk of global catastrophe can be easy to dismiss as fearmongering. Yet many of the specific worries about such systems are not free-floating fantasies but extensions of patterns we already see. This essay examines thirteen distinct ways artificial superintelligence could go wrong and, for each, pairs the abstract failure mode with concrete precedents where a similar pattern has already caused serious harm. By assembling a broad cross-domain catalog of such precedents, I aim to show that concerns about artificial superintelligence track recurring failure modes in our world.

This essay is also an experiment in writing with extensive assistance from artificial intelligence, producing work I couldn't have written without it. That a current system can help articulate a case for the catastrophic potential of its own lineage is itself a significant fact; we have already left the realm of speculative fiction and begun to build the very agents that constitute the risk. On a personal note, this collaboration with artificial intelligence is part of my effort to rebuild the intellectual life that my stroke disrupted and, I hope, to push it beyond where it stood before.

Section 1: Power Asymmetry [...]

---

First published:
January 16th, 2026

Source:
https://www.lesswrong.com/posts/kLvhBSwjWD9wjejWn/precedents-for-the-unprecedented-historical-analogies-for-1

---

Narrated by TYPE III AUDIO.


