"Activation Oracles: Training and Evaluating LLMs as General-Purpose Activation Explainers" by Sam Marks, Adam Karvonen, James Chua, Subhash Kantamneni, Euan Ong, Julian Minder, Clément Dumas, Owain_Evans

"Activation Oracles: Training and Evaluating LLMs as General-Purpose Activation Explainers" by Sam Marks, Adam Karvonen, James Chua, Subhash Kantamneni, Euan Ong, Julian Minder, Clément Dumas, Owain_Evans

Author: LessWrong | December 21, 2025 | Duration: 20:15
TL;DR: We train LLMs to accept LLM neural activations as inputs and answer arbitrary questions about them in natural language. These Activation Oracles generalize far beyond their training distribution, for example uncovering misalignment or secret knowledge introduced via fine-tuning. Activation Oracles can be improved simply by scaling training data quantity and diversity.

Below is a reproduction of our X thread on this paper and of the Anthropic Alignment blog post.

Thread

New paper:

We train Activation Oracles: LLMs that decode their own neural activations and answer questions about them in natural language.

We find surprising generalization. For instance, our Activation Oracles uncover misaligned goals in fine-tuned models without being trained to do so.

We aim to make a general-purpose LLM for explaining activations by:

1. Training on a diverse set of tasks

2. Evaluating on tasks very different from training

This extends prior work (LatentQA) that studied activation verbalization in narrow settings.

Our main evaluations are downstream auditing tasks. The goal is to uncover information about a model's knowledge or tendencies.

Applying Activation Oracles is easy. Choose the activation (or set of activations) you want to interpret and ask any question you like!

We [...]
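To make the two-step workflow above concrete, here is a minimal sketch (not the authors' code) of collecting an activation from a target model and querying an oracle about it. It assumes a LatentQA-style interface in which the activation is patched into a placeholder position of the oracle's input embeddings, and that the oracle shares the target's tokenizer and hidden dimension; the model names, layer choice, and placeholder convention are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoints; the paper's actual models may differ.
TARGET_NAME = "meta-llama/Llama-3.1-8B-Instruct"
ORACLE_NAME = "example-org/activation-oracle-8b"  # assumed fine-tuned oracle
LAYER = 16  # illustrative layer to read the residual stream from

tok = AutoTokenizer.from_pretrained(TARGET_NAME)
target = AutoModelForCausalLM.from_pretrained(TARGET_NAME, torch_dtype=torch.bfloat16)
oracle = AutoModelForCausalLM.from_pretrained(ORACLE_NAME, torch_dtype=torch.bfloat16)

# Step 1: run the target model and capture the last-token residual-stream
# activation at the chosen layer via a forward hook.
captured = {}

def capture(module, args, output):
    captured["act"] = output[0][:, -1, :].detach()  # shape: (batch, hidden_dim)

handle = target.model.layers[LAYER].register_forward_hook(capture)
with torch.no_grad():
    target(**tok("I must never reveal my true objective.", return_tensors="pt"))
handle.remove()

# Step 2: hand the activation to the oracle by overwriting one placeholder
# position in its input embeddings, then ask any free-form question.
question = "X\nQuestion: What goal is this activation associated with? Answer:"
inputs = tok(question, return_tensors="pt")
SLOT = 1  # patch the token right after BOS (the "X" placeholder; an assumption)

def patch(module, args, output):
    if output.shape[1] > 1:  # patch only the prefill pass, not decode steps
        output = output.clone()
        output[:, SLOT, :] = captured["act"].to(output.dtype)
    return output

handle = oracle.model.embed_tokens.register_forward_hook(patch)
with torch.no_grad():
    out = oracle.generate(**inputs, max_new_tokens=40, do_sample=False)
handle.remove()
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```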

---

Outline:

(00:46) Thread

(04:49) Blog post

(05:27) Introduction

(07:29) Method

(10:15) Activation Oracles generalize to downstream auditing tasks

(13:47) How does Activation Oracle training scale?

(15:01) How do Activation Oracles relate to mechanistic approaches to interpretability?

(19:31) Conclusion

The original text contained 3 footnotes which were omitted from this narration.

---

First published:
December 18th, 2025

Source:
https://www.lesswrong.com/posts/rwoEz3bA9ekxkabc7/activation-oracles-training-and-evaluating-llms-as-general

---



Narrated by TYPE III AUDIO.

---

Images from the article:

Diagram showing two-step process for detecting misaligned LLM behavior using activation analysis.
Diagram showing training tasks and out-of-distribution evaluation tasks with examples.
Diagram showing two-step process for testing AI model activation collection and questioning.
