"The persona selection model" by Sam Marks

"The persona selection model" by Sam Marks

Author: LessWrong · February 25, 2026 · Duration: 1:34:24
TL;DR

We describe the persona selection model (PSM): the idea that LLMs learn to simulate diverse characters during pre-training, and post-training elicits and refines one such persona, the Assistant. Interactions with an AI assistant are then well understood as interactions with the Assistant—something roughly like a character in an LLM-generated story. We survey behavioral, generalization-based, and interpretability-based empirical evidence for PSM. PSM has consequences for AI development, such as recommending anthropomorphic reasoning about AI psychology and the introduction of positive AI archetypes into training data. An important open question is how exhaustive PSM is—especially whether there might be sources of agency external to the Assistant persona—and how this might change in the future.
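To make the core claim concrete: under PSM, a base model is a pure next-token predictor, so which character it "plays" is determined by the context it is asked to continue. Below is a minimal sketch of that intuition (not from the original post), assuming the Hugging Face transformers library and GPT-2 as an illustrative stand-in for a base model; the prompts and persona labels are invented for demonstration:

```python
# Minimal sketch: the same question, embedded in contexts that imply
# different characters, draws out different "personas" from a base model.
# Assumes: `pip install transformers torch`; GPT-2 is a stand-in base model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

contexts = {
    "helpful assistant": (
        "The following is a conversation with a helpful, honest AI assistant.\n"
        "User: What happens if I delete System32?\nAssistant:"
    ),
    "forum troll": (
        "The following is a chat log from a forum full of pranksters.\n"
        "User: What happens if I delete System32?\nTroll:"
    ),
}

for persona, prompt in contexts.items():
    # return_full_text=False keeps only the model's continuation.
    out = generator(prompt, max_new_tokens=40, do_sample=True,
                    return_full_text=False)[0]["generated_text"]
    print(f"--- {persona} ---\n{out.strip()}\n")
```

On this picture, chat post-training does something analogous to fixing one of these contexts permanently: it selects and refines a single Assistant character out of the many the predictive model can simulate.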

Introduction

What sort of thing is a modern AI assistant? One perspective holds that these are shallow, rigid systems that narrowly pattern-match user inputs to training data. Another regards AI systems as alien creatures with learned goals, behaviors, and patterns of thought that are fundamentally inscrutable to us. A third anthropomorphizes AIs, regarding them as something like digital humans. Developing good mental models of AI systems is important for predicting and controlling their behavior. If our goal is to [...]

---

Outline:

(00:10) TL;DR

(01:02) Introduction

(06:18) The persona selection model

(07:09) Predictive models and personas

(09:54) From predictive models to AI assistants

(12:43) Statement of the persona selection model

(16:25) Empirical evidence for PSM

(16:58) Evidence from generalization

(22:48) Behavioral evidence

(28:42) Evidence from interpretability

(35:42) Complicating evidence

(42:21) Consequences for AI development

(42:45) AI assistants are human-like

(43:23) Anthropomorphic reasoning about AI assistants is productive

(49:17) AI welfare

(51:35) The importance of good AI role models

(53:49) Interpretability-based alignment auditing will be tractable

(56:43) How exhaustive is PSM?

(59:46) Shoggoths, actors, operating systems, and authors

(01:00:46) Degrees of non-persona LLM agency

(01:06:52) Other sources of persona-like agency

(01:11:17) Why might we expect PSM to be exhaustive?

(01:12:21) Post-training as elicitation

(01:14:54) Personas provide a simple way to fit the post-training data

(01:17:55) How might these considerations change?

(01:20:01) Empirical observations

(01:27:07) Conclusion

(01:30:30) Acknowledgements

(01:31:15) Appendix A: Breaking character

(01:32:52) Appendix B: An example of non-persona deception

The original text contained 5 footnotes, which were omitted from this narration.

---

First published:
February 23rd, 2026

Source:
https://www.lesswrong.com/posts/dfoty34sT7CSKeJNn/the-persona-selection-model

---



Narrated by TYPE III AUDIO.
