“My AGI safety research—2025 review, ’26 plans” by Steven Byrnes

LessWrong (Curated & Popular) · December 15, 2025 · Duration: 22:06
Previous: 2024, 2022

“Our greatest fear should not be of failure, but of succeeding at something that doesn't really matter.” –attributed to D.L. Moody[1]

1. Background & threat model

The main threat model I’m working to address is the same as it's been since I was hobby-blogging about AGI safety in 2019. Basically, I think that:

  • The “secret sauce” of human intelligence is a big uniform-ish learning algorithm centered around the cortex;
  • This learning algorithm is different from and more powerful than LLMs;
  • Nobody knows how it works today;
  • Someone someday will either reverse-engineer this learning algorithm, or reinvent something similar;
  • And then we’ll have Artificial General Intelligence (AGI) and artificial superintelligence (ASI).
I think that, when this learning algorithm is understood, it will be easy to get it to do powerful and impressive things, and to make money, as long as it's weak enough that humans can keep it under control. But past that stage, we’ll be relying on the AGIs to have good motivations, and not be egregiously misaligned and scheming to take over the world and wipe out humanity. Alas, I claim that the latter kind of motivation is what we should expect to occur, in [...]

---

Outline:

(00:26) 1. Background & threat model

(02:24) 2. The theme of 2025: trying to solve the technical alignment problem

(04:02) 3. Two sketchy plans for technical AGI alignment

(07:05) 4. On to what I've actually been doing all year!

(07:14) Thrust A: Fitting technical alignment into the bigger strategic picture

(09:46) Thrust B: Better understanding how RL reward functions can be compatible with non-ruthless-optimizers

(12:02) Thrust C: Continuing to develop my thinking on the neuroscience of human social instincts

(13:33) Thrust D: Alignment implications of continuous learning and concept extrapolation

(14:41) Thrust E: Neuroscience odds and ends

(16:21) Thrust F: Economics of superintelligence

(17:18) Thrust G: AGI safety miscellany

(17:41) Thrust H: Outreach

(19:13) 5. Other stuff

(20:05) 6. Plan for 2026

(21:03) 7. Acknowledgements

The original text contained 7 footnotes which were omitted from this narration.

---

First published:
December 11th, 2025

Source:
https://www.lesswrong.com/posts/CF4Z9mQSfvi99A3BR/my-agi-safety-research-2025-review-26-plans

---

Narrated by TYPE III AUDIO.

---

Images from the article:

The blue and red correspond to “Plan Type 2” and “Plan Type 1”, respectively. Source: Intro series §12 (2022)
