"Responsible Scaling Policy v3" by HoldenKarnofsky

"Responsible Scaling Policy v3" by HoldenKarnofsky

Source: LessWrong · February 25, 2026 · Duration: 1:03:01

All views are my own, not Anthropic's. This post assumes Anthropic's announcement of RSP v3.0 as background.

Today, Anthropic released its Responsible Scaling Policy 3.0. The official announcement discusses the high-level thinking behind it. This is a more detailed post giving my own takes on the update.

First, the big picture:

  • I expect some people will be upset about the move away from a “hard commitments”/”binding ourselves to the mast” vibe. (Anthropic has always had the ability to revise the RSP, and we’ve always had language in there specifically flagging that we might revise away key commitments in a situation where other AI developers aren’t adhering to similar commitments. But it's been easy to get the impression that the RSP is “binding ourselves to the mast” and committing to unilaterally pause AI development and deployment under some conditions, and Anthropic is responsible for that.)
  • I take significant responsibility for this change. I have been pushing for this change for about a year now, and have led the way in developing the new RSP. I am in favor of nearly everything about the changes we’re making. I am excited about the Roadmap, the Risk Reports, the move toward external [...]

---

Outline:

(05:32) How it started: the original goals of RSPs

(11:25) How it's going: the good and the bad

(11:51) A note on my general orientation toward this topic

(14:56) Goal 1: forcing functions for improved risk mitigations

(15:02) A partial success story: robustness to jailbreaks for particular uses of concern, in line with the ASL-3 deployment standard

(18:24) A mixed success/failure story: impact on information security

(20:42) ASL-4 and ASL-5 prep: the wrong incentives

(25:00) When forcing functions do and don't work well

(27:52) Goal 2 (testbed for practices and policies that can feed into regulation)

(29:24) Goal 3 (working toward consensus and common knowledge about AI risks and potential mitigations)

(30:59) RSP v3's attempt to amplify the good and reduce the bad

(36:01) Do these benefits apply only to the most safety-oriented companies?

(37:40) A revised, but not overturned, vision for RSPs

(39:08) Q&A

(39:10) On the move away from implied unilateral commitments

(39:15) Is RSP v3 proactively sending a race-to-the-bottom signal? Why be the first company to explicitly abandon the high ambition for achieving low levels of risk?

(40:34) How sure are you that a voluntary industry-wide pause can't happen? Are you worried about signaling that you'll be the first to defect in a prisoner's dilemma?

(42:03) How sure are you that you can't actually sprint to achieve the level of information security, alignment science understanding, and deployment safeguards needed to make arbitrarily powerful AI systems low-risk?

(43:49) What message will this change send to regulators? Will it make ambitious regulation less likely by making companies' commitments to low risk look less serious?

(45:10) Why did you have to do this now? Couldn't you have waited until the last possible moment to make this change, in case the more ambitious risk mitigations ended up working out?

[... 15 more sections]

---

First published:
February 24th, 2026

Source:
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsible-scaling-policy-v3

---

Narrated by TYPE III AUDIO

