"Weight-Sparse Circuits May Be Interpretable Yet Unfaithful" by jacob_drori

"Weight-Sparse Circuits May Be Interpretable Yet Unfaithful" by jacob_drori

Author: LessWrong | Date: February 13, 2026 | Duration: 26:57
TLDR: Recently, Gao et al. trained transformers with sparse weights and introduced a pruning algorithm to extract circuits that explain performance on narrow tasks. I replicate their main results and present evidence suggesting that these circuits are unfaithful to the model's “true computations”.

This work was done as part of the Anthropic Fellows Program under the mentorship of Nick Turner and Jeff Wu.

Introduction

Recently, Gao et al. (2025) proposed an exciting approach to training models that are interpretable by design. They train transformers in which only a small fraction of the weights are nonzero, and find that pruning these sparse models on narrow tasks yields interpretable circuits. Their key claim is that these weight-sparse models are more interpretable than ordinary dense ones, with smaller task-specific circuits. Below, I reproduce the primary evidence for these claims: training weight-sparse models does tend to produce smaller circuits at a given task loss than dense models, and the circuits also look interpretable.
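
The excerpt doesn't include code, but the training idea can be made concrete. One common way to enforce this kind of weight sparsity is to project each weight matrix back onto its largest-magnitude entries after every optimizer step. The PyTorch sketch below illustrates that scheme; the function name, the `density` value, and the choice to skip 1-D parameters are my own illustrative assumptions, not necessarily the procedure Gao et al. actually use.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def project_topk_(model: nn.Module, density: float = 0.01) -> None:
    """Zero all but the largest-magnitude `density` fraction of each
    weight matrix (illustrative sparsity projection, not Gao et al.'s
    exact recipe)."""
    for p in model.parameters():
        if p.dim() < 2:          # leave biases / layernorm gains dense
            continue
        k = max(1, int(density * p.numel()))
        # magnitude of the k-th largest entry = keep threshold
        thresh = p.abs().flatten().kthvalue(p.numel() - k + 1).values
        p.mul_((p.abs() >= thresh).to(p.dtype))

# In a training loop, the projection would run after each update:
# loss.backward(); optimizer.step(); project_topk_(model)
```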

However, there are reasons to worry that these results don't imply that we're capturing the model's full computation. For example, previous work [1, 2] found that similar masking techniques can achieve good performance on vision tasks even when applied to a [...]
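
To make "masking techniques" concrete: in work of this kind, the weights are frozen and a binary mask over them is optimized by gradient descent to minimize task loss. Below is a minimal, hypothetical sketch of such a masked layer using a straight-through estimator; `MaskedLinear`, the `keep` fraction, and the score initialization are illustrative choices, not the cited papers' exact setups.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Frozen linear layer gated by a learned binary mask over weights."""
    def __init__(self, linear: nn.Linear, keep: float = 0.05):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach().clone(),
                                   requires_grad=False)    # frozen
        self.scores = nn.Parameter(0.01 * torch.randn_like(self.weight))
        self.keep = keep                                   # fraction kept

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.keep * self.scores.numel()))
        thresh = self.scores.flatten().kthvalue(
            self.scores.numel() - k + 1).values
        hard = (self.scores >= thresh).float()
        # straight-through estimator: binary mask on the forward pass,
        # identity gradient to the scores on the backward pass
        mask = hard + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)
```

The worry this raises is that a sufficiently expressive mask can reach low task loss for reasons unrelated to the base model's own computation, so low pruned-circuit loss by itself is weak evidence of faithfulness.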

---

Outline:

(00:36) Introduction

(03:03) Tasks

(03:16) Task 1: Pronoun Matching

(03:47) Task 2: Simplified IOI

(04:28) Task 3: Question Marks

(05:10) Results

(05:20) Producing Sparse Interpretable Circuits

(05:25) Zero ablation yields smaller circuits than mean ablation

(06:01) Weight-sparse models usually have smaller circuits

(06:37) Weight-sparse circuits look interpretable

(09:06) Scrutinizing Circuit Faithfulness

(09:11) Pruning achieves low task loss on a nonsense task

(10:24) Important attention patterns can be absent in the pruned model

(11:26) Nodes can play different roles in the pruned model

(14:15) Pruned circuits may not generalize like the base model

(16:16) Conclusion

(18:09) Appendix A: Training and Pruning Details

(20:17) Appendix B: Walkthrough of pronouns and questions circuits

(22:48) Appendix C: The Role of Layernorm

The original text contained 6 footnotes which were omitted from this narration.

---

First published:
February 9th, 2026

Source:
https://www.lesswrong.com/posts/sHpZZnRDLg7ccX9aF/weight-sparse-circuits-may-be-interpretable-yet-unfaithful

---



Narrated by TYPE III AUDIO.

---

Images from the article:

Interactive visualization showing sentence structure with dependency parse trees and grammatical relationships.
