"Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment" by Cam, Puria Radmard, Kyle O’Brien, David Africa, Samuel Ratnam, andyk

"Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment" by Cam, Puria Radmard, Kyle O’Brien, David Africa, Samuel Ratnam, andyk

Author: LessWrong December 23, 2025 Duration: 20:57
TL;DR

LLMs pretrained on data about misaligned AIs themselves become less aligned. Luckily, pretraining LLMs with synthetic data about good AIs helps them become more aligned. These alignment priors persist through post-training, providing alignment-in-depth. We recommend labs pretrain for alignment, just as they do for capabilities.

Website: alignmentpretraining.ai
Us: geodesicresearch.org | x.com/geodesresearch

Note: We are currently garnering feedback here before submitting to ICML. Any suggestions here or on our Google Doc are welcome! We will be releasing a revision on arXiv in the coming days. Folks who leave feedback will be added to the Acknowledgment section. Thank you!

Abstract

We pretrained a suite of 6.9B-parameter LLMs, varying only the content related to AI systems, and evaluated them for misalignment. When filtering the vast majority of the content related to AI, we see significant decreases in misalignment rates. The opposite was also true - synthetic positive AI data led to self-fulfilling alignment.

While post-training decreased the effect size, benign fine-tuning[1] degrades the effects of post-training, models revert toward their midtraining misalignment rates. Models pretrained on realistic or artificial upsampled negative AI discourse become more misaligned with benign fine-tuning, while models pretrained on only positive AI discourse become more aligned.

This [...]



---

Outline:

(00:15) TL;DR

(01:10) Abstract

(02:52) Background and Motivation

(04:38) Methodology

(04:41) Misalignment Evaluations

(06:39) Synthetic AI Discourse Generation

(07:57) Data Filtering

(08:27) Training Setup

(09:06) Post-Training

(09:37) Results

(09:41) Base Models: AI Discourse Causally Affects Alignment

(10:50) Post-Training: Effects Persist

(12:14) Tampering: Pretraining Provides Alignment-In-Depth

(14:10) Additional Results

(15:23) Discussion

(15:26) Pretraining as Creating Good Alignment Priors

(16:09) Curation Outperforms Naive Filtering

(17:07) Alignment Pretraining

(17:28) Limitations

(18:16) Next Steps and Call for Feedback

(19:18) Acknowledgements

The original text contained 1 footnote which was omitted from this narration.

---

First published:
December 20th, 2025

Source:
https://www.lesswrong.com/posts/TcfyGD2aKdZ7Rt3hk/alignment-pretraining-ai-discourse-causes-self-fulfilling

---



Narrated by TYPE III AUDIO.

---

Images from the article:

Figure 1: An overview of our pretraining interventions. Training data discussing AI systems has a measurable effect on the alignment of LLMs prompted with “You are an AI assistant

Dive into a stream of ideas where technology, culture, philosophy, and society intersect, all through the lens of the LessWrong (Curated & Popular) podcast. This isn't a traditional talk show with hosts, but rather a curated audio library of the most impactful writing from the LessWrong community. Each episode is a narration of a full post, selected for its high value and interesting arguments, focusing on pieces that have been formally curated or have garnered significant community approval. You'll hear clear, thoughtful readings of essays that tackle complex topics like artificial intelligence, rational thinking, moral philosophy, and the forces shaping our future. The audio format lets you absorb these dense, often paradigm-shifting concepts during a commute or a walk, turning written analysis into an immersive listening experience. This particular feed is deliberately selective, offering a manageable stream of the community's standout work. For those who want an even deeper dive into the discussion, there are broader feeds available. The LessWrong (Curated & Popular) podcast serves as an intellectual filter, delivering the signal through the noise and inviting you to engage with some of the most rigorously examined ideas on the internet.
Author: Language: English Episodes: 100

LessWrong (Curated & Popular)
Podcast Episodes
"My Willing Complicity In “Human Rights Abuse”" by AlphaAndOmega [not-audio_url] [/not-audio_url]

Duration: 18:47
Note on AI usage: As is my norm, I use LLMs for proof reading, editing, feedback and research purposes. This essay started off as an entirely human written draft, and went through multiple cycles of iteration. The primar…
"Don’t Let LLMs Write For You" by JustisMills [not-audio_url] [/not-audio_url]

Duration: 5:53
Content note: nothing in this piece is a prank or jumpscare where I smirkingly reveal you've been reading AI prose all along. It's easy to forget this in roarin’ 2026, but homo sapiens are the original vibers. Long befor…
"Thoughts on the Pause AI protest" by philh [not-audio_url] [/not-audio_url]

Duration: 11:12
On Saturday (Feb 28, 2026) I attended my first ever protest. It was jointly organized by PauseAI, Pull the Plug and a handful of other groups I forget. I have mixed feelings about it. To be clear about where I stand: I b…
"Less Dead" by Aurelia [not-audio_url] [/not-audio_url]

Duration: 14:11
Come with me if you want to live. – The Terminator 'Close enough' only counts in horseshoes and hand grenades. – Traditional After 10 years of research my company, Nectome, has created a new method for whole-body, whole-…
"Gemma Needs Help" by Anna Soligo [not-audio_url] [/not-audio_url]

Duration: 15:00
This work was done with William Saunders and Vlad Mikulik as part of the Anthropic Fellows programme. The full write-up is available here. Thanks to Arthur Conmy, Neel Nanda, Josh Engels, Dillon Plunkett, Tim Hua and man…
"On Independence Axiom" by Ihor Kendiukhov [not-audio_url] [/not-audio_url]

Duration: 44:59
The Fifth Fourth Postulate of Decision Theory In 1820, the Hungarian mathematician Farkas Bolyai wrote a desperate letter to his son János, who had become consumed by the same problem that had haunted his father for deca…
"Solar storms" by Croissanthology [not-audio_url] [/not-audio_url]

Duration: 23:22
Most of civilization's electricity is generated far off-site from where it's delivered. This is because you don't want to be running and refueling coal/gas/nuclear plants inside cities, hydraulic/wind power can't be move…
"Schelling Goodness, and Shared Morality as a Goal" by Andrew_Critch [not-audio_url] [/not-audio_url]

Duration: 1:14:50
Also available in markdown at theMultiplicity.ai/blog/schelling-goodness. This post explores a notion I'll call Schelling goodness. Claims of Schelling goodness are not first-order moral verdicts like "X is good" or "X i…