"IABIED Book Review: Core Arguments and Counterarguments" by Stephen McAleese

"IABIED Book Review: Core Arguments and Counterarguments" by Stephen McAleese

Author: LessWrong | February 5, 2026 | Duration: 50:18
The recent book “If Anyone Builds It, Everyone Dies” (September 2025) by Eliezer Yudkowsky and Nate Soares argues that creating superintelligent AI in the near future would almost certainly cause human extinction:

If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

The goal of this post is to summarize and evaluate the book's key arguments and the main counterarguments critics have made against them.

Although several other book reviews have already been written, I found many of them unsatisfying: a lot of them are written by journalists whose goal is to produce an entertaining piece, so they only lightly cover the core arguments or don’t seem to understand them properly, and instead resort to weak tactics like straw-manning, ad hominem attacks, or criticizing the style of the book.

So my goal is to write a book review that has the following properties:

  • Written by someone who has read a substantial amount of AI alignment and LessWrong content and won’t make AI alignment beginner mistakes or misunderstandings (e.g. not knowing about the [...]
---

Outline:

(07:43) Background arguments to the key claim

(09:21) The key claim: ASI alignment is extremely difficult to solve

(12:52) 1. Human values are a very specific, fragile, and tiny space of all possible goals

(15:25) 2. Current methods used to train goals into AIs are imprecise and unreliable

(16:42) The inner alignment problem

(17:25) Inner alignment introduction

(19:03) Inner misalignment evolution analogy

(21:03) Real examples of inner misalignment

(22:23) Inner misalignment explanation

(25:05) ASI misalignment example

(27:40) 3. The ASI alignment problem is hard because it has the properties of hard engineering challenges

(28:10) Space probes

(29:09) Nuclear reactors

(30:18) Computer security

(30:35) Counterarguments to the book

(30:46) Arguments that the book's arguments are unfalsifiable

(33:19) Arguments against the evolution analogy

(37:38) Arguments against counting arguments

(40:16) Arguments based on the aligned behavior of modern LLMs

(43:16) Arguments against engineering analogies to AI alignment

(45:05) Three counterarguments to the book's three core arguments

(46:43) Conclusion

(49:23) Appendix

---

First published:
January 24th, 2026

Source:
https://www.lesswrong.com/posts/qFzWTTxW37mqnE6CA/iabied-book-review-core-arguments-and-counterarguments

---



Narrated by TYPE III AUDIO.

---

Images from the article:

Flowchart showing the beliefs of AI skeptics, singularitarians, the IABIED authors, and AI successionists.
