"Responsible Scaling Policy v3" by HoldenKarnofsky

"Responsible Scaling Policy v3" by HoldenKarnofsky

LessWrong · February 25, 2026 · Duration: 1:03:01

All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.

Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.

First, the big picture:

  • I expect some people will be upset about the move away from a "hard commitments"/"binding ourselves to the mast" vibe. (Anthropic has always had the ability to revise the RSP, and we've always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren't adhering to similar commitments. But it's been easy to get the impression that the RSP is "binding ourselves to the mast" and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that impression.)
  • I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]

---

Outline:

(05:32) How it started: the original goals of RSPs

(11:25) How it's going: the good and the bad

(11:51) A note on my general orientation toward this topic

(14:56) Goal 1: forcing functions for improved risk mitigations

(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard

(18:24) A mixed success/failure story: impact on information security

(20:42) ASL-4 and ASL-5 prep: the wrong incentives

(25:00) When forcing functions do and don't work well

(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)

(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)

(30:59) RSP v3's attempt to amplify the good and reduce the bad

(36:01) Do these benefits apply only to the most safety-oriented companies?

(37:40) A revised, but not overturned, vision for RSPs

(39:08) Q&A

(39:10) On the move away from implied unilateral commitments

(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?

(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?

(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?

(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?

(45:10) Why did you have to do this now? Couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?

[... 15 more sections]

---

First published:
February 24th, 2026

Source:
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsible-scaling-policy-v3

---

Narrated by TYPE III AUDIO

