“MIRI’s 2025 Fundraiser” by alexvermeer

Author: LessWrong · December 3, 2025 · Duration: 15:37
MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to improve the conversation about superintelligence and help the world chart a viable path forward.

MIRI is a nonprofit with a goal of helping humanity make smart and sober decisions on the topic of smarter-than-human AI.

Our main focus from 2000 to ~2022 was on technical research to try to make it possible to build such AIs without catastrophic outcomes. More recently, we’ve pivoted to raising an alarm about how the race to superintelligent AI has put humanity on course for disaster.

In 2025, those efforts centered on Nate Soares and Eliezer Yudkowsky's book (now a New York Times bestseller) If Anyone Builds It, Everyone Dies, with many public appearances by the authors; many conversations with policymakers; the release of an expansive online supplement to the book; and various technical governance publications, including a recent report with a draft of an international agreement of the kind that could actually address the danger of superintelligence.

Millions have now viewed interviews and appearances with Eliezer and/or Nate [...]

---

Outline:

(02:18) The Big Picture

(03:39) Activities

(03:42) Communications

(07:55) Governance

(12:31) Fundraising

The original text contained 4 footnotes which were omitted from this narration.

---

First published:
December 1st, 2025

Source:
https://www.lesswrong.com/posts/z4jtxKw8xSHRqQbqw/miri-s-2025-fundraiser

---



Narrated by TYPE III AUDIO.


