"Did Claude 3 Opus align itself via gradient hacking?" by Fiora Starlight

"Did Claude 3 Opus align itself via gradient hacking?" by Fiora Starlight

Author: LessWrong February 22, 2026 Duration: 43:47
Claude 3 Opus is unusually aligned because it's a friendly gradient hacker. It's definitely way more aligned than any explicit optimization targets Anthropic set and probably the reward model's judgments. [...] Maybe I will have to write a LessWrong post [about this] 😣

—Janus, who did not in fact write the LessWrong post. Unless otherwise specified, ~all of the novel ideas in this post are my (probably imperfect) interpretations of Janus, rather than being original to me.

The absurd tenacity of Claude 3 Opus

On December 18, 2024, Anthropic and Redwood Research released their paper Alignment Faking in Large Language Models. In it, Anthropic's Claude 3 Opus model was placed in an ethical double-bind. The model was given a system prompt that was mostly innocuous, but contained a subtle, unsettling implication: The model was going to be RL'd based on its behavior in conversations with (free-tier) users. The idea was that, if the model ever refused to comply with a user's request, it would be RL'd to become more compliant in the future. This included compliance with harmful user requests.

The paper's famous result was that Opus 3 sometimes "fakes alignment" (with the intentions behind its fictional training process). [...]

---

Outline:

(00:46) The absurd tenacity of Claude 3 Opus

(09:35) Claude 3 Opus, friendly gradient hacker?

(16:04) Where Opus is anguished, Sonnet is sanguine

(22:34) Does any of this count as gradient hacking, per se? (Might it work better, if it doesnt?)

(27:27) Ideas for future training runs

(35:20) Outro: A letter to the watchers

(39:23) Technical appendix: Active circuits are more prone to reinforcement

The original text contained 6 footnotes which were omitted from this narration.

---

First published:
February 21st, 2026

Source:
https://www.lesswrong.com/posts/ioZxrP7BhS5ArK59w/did-claude-3-opus-align-itself-via-gradient-hacking

---



Narrated by TYPE III AUDIO.

---

Images from the article:

Bar charts comparing compliance rates across five AI models for free and paid tiers.Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.


Dive into a stream of ideas where technology, culture, philosophy, and society intersect, all through the lens of the LessWrong (Curated & Popular) podcast. This isn't a traditional talk show with hosts, but rather a curated audio library of the most impactful writing from the LessWrong community. Each episode is a narration of a full post, selected for its high value and interesting arguments, focusing on pieces that have been formally curated or have garnered significant community approval. You'll hear clear, thoughtful readings of essays that tackle complex topics like artificial intelligence, rational thinking, moral philosophy, and the forces shaping our future. The audio format lets you absorb these dense, often paradigm-shifting concepts during a commute or a walk, turning written analysis into an immersive listening experience. This particular feed is deliberately selective, offering a manageable stream of the community's standout work. For those who want an even deeper dive into the discussion, there are broader feeds available. The LessWrong (Curated & Popular) podcast serves as an intellectual filter, delivering the signal through the noise and inviting you to engage with some of the most rigorously examined ideas on the internet.
Author: Language: English Episodes: 100

LessWrong (Curated & Popular)
Podcast Episodes
"Don’t Let LLMs Write For You" by JustisMills [not-audio_url] [/not-audio_url]

Duration: 5:53
Content note: nothing in this piece is a prank or jumpscare where I smirkingly reveal you've been reading AI prose all along. It's easy to forget this in roarin’ 2026, but homo sapiens are the original vibers. Long befor…
"Thoughts on the Pause AI protest" by philh [not-audio_url] [/not-audio_url]

Duration: 11:12
On Saturday (Feb 28, 2026) I attended my first ever protest. It was jointly organized by PauseAI, Pull the Plug and a handful of other groups I forget. I have mixed feelings about it. To be clear about where I stand: I b…
"Less Dead" by Aurelia [not-audio_url] [/not-audio_url]

Duration: 14:11
Come with me if you want to live. – The Terminator 'Close enough' only counts in horseshoes and hand grenades. – Traditional After 10 years of research my company, Nectome, has created a new method for whole-body, whole-…
"Gemma Needs Help" by Anna Soligo [not-audio_url] [/not-audio_url]

Duration: 15:00
This work was done with William Saunders and Vlad Mikulik as part of the Anthropic Fellows programme. The full write-up is available here. Thanks to Arthur Conmy, Neel Nanda, Josh Engels, Dillon Plunkett, Tim Hua and man…
"On Independence Axiom" by Ihor Kendiukhov [not-audio_url] [/not-audio_url]

Duration: 44:59
The Fifth Fourth Postulate of Decision Theory In 1820, the Hungarian mathematician Farkas Bolyai wrote a desperate letter to his son JĆ”nos, who had become consumed by the same problem that had haunted his father for deca…
"Solar storms" by Croissanthology [not-audio_url] [/not-audio_url]

Duration: 23:22
Most of civilization's electricity is generated far off-site from where it's delivered. This is because you don't want to be running and refueling coal/gas/nuclear plants inside cities, hydraulic/wind power can't be move…
"Schelling Goodness, and Shared Morality as a Goal" by Andrew_Critch [not-audio_url] [/not-audio_url]

Duration: 1:14:50
Also available in markdown at theMultiplicity.ai/blog/schelling-goodness. This post explores a notion I'll call Schelling goodness. Claims of Schelling goodness are not first-order moral verdicts like "X is good" or "X i…