"Responsible Scaling Policy v3" by Holden Karnofsky

Author: Holden Karnofsky | LessWrong | February 25, 2026 | Duration: 1:03:01

All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.

Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.

First, the big picture:

  • I expect some people will be upset about the move away from a “hard commitments”/“binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we’ve always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren’t adhering to similar commitments. But it's been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.)
  • I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]

---

Outline:

(05:32) How it started: the original goals of RSPs

(11:25) How it's going: the good and the bad

(11:51) A note on my general orientation toward this topic

(14:56) Goal 1: forcing functions for improved risk mitigations

(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard

(18:24) A mixed success/failure story: impact on information security

(20:42) ASL-4 and ASL-5 prep: the wrong incentives

(25:00) When forcing functions do and don't work well

(27:52) Goal 2: testbed for practices and policies that can feed into regulation

(29:24) Goal 3: working toward consensus and common knowledge about AI risks and potential mitigations

(30:59) RSP v3's attempt to amplify the good and reduce the bad

(36:01) Do these benefits apply only to the most safety-oriented companies?

(37:40) A revised, but not overturned, vision for RSPs

(39:08) Q&A

(39:10) On the move away from implied unilateral commitments

(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?

(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?

(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?

(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?

(45:10) Why did you have to do this now? Couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?

[... 15 more sections]

---

First published:
February 24th, 2026

Source:
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsible-scaling-policy-v3

---

Narrated by TYPE III AUDIO

