“Stop Applying And Get To Work” by plex

LessWrong (Curated & Popular) · November 25, 2025 · Duration: 2:52
TL;DR: Figure out what needs doing and do it; don't wait on approval from fellowships or jobs.

If you...

  • Have short timelines
  • Have been struggling to get into a position in AI safety
  • Are able to self-motivate your efforts
  • Have a sufficient financial safety net
... I would recommend changing your personal strategy entirely.

I started transitioning into full-time AI safety work in March 2025. For the first 7 months or so, I heavily prioritized applying for jobs and fellowships. But, like many others trying to "break into the field" and get their "foot in the door", I found this quite discouraging.

I'm not gonna get into the numbers here, but if you've been applying and getting rejected multiple times during the past year or so, you've probably noticed the number of applicants increasing at a preposterous rate. What this means in practice is that the "entry-level" positions are practically impossible for "entry-level" people to enter.

If you're like me and have short timelines, the cycle of applying, getting better at applying, and applying again becomes meaningless very fast. You're optimizing for signaling competence rather than actually being competent. Because if you a) have short timelines, and b) are [...]

The original text contained 3 footnotes which were omitted from this narration.

---

First published: November 23rd, 2025

Source: https://www.lesswrong.com/posts/ey2kjkgvnxK3Bhman/stop-applying-and-get-to-work

---



Narrated by TYPE III AUDIO.


Dive into a stream of ideas where technology, culture, philosophy, and society intersect, all through the lens of the LessWrong (Curated & Popular) podcast. This isn't a traditional talk show with hosts, but rather a curated audio library of the most impactful writing from the LessWrong community. Each episode is a narration of a full post, selected for its high value and interesting arguments, focusing on pieces that have been formally curated or have garnered significant community approval.

You'll hear clear, thoughtful readings of essays that tackle complex topics like artificial intelligence, rational thinking, moral philosophy, and the forces shaping our future. The audio format lets you absorb these dense, often paradigm-shifting concepts during a commute or a walk, turning written analysis into an immersive listening experience.

This particular feed is deliberately selective, offering a manageable stream of the community's standout work. For those who want an even deeper dive into the discussion, there are broader feeds available. The LessWrong (Curated & Popular) podcast serves as an intellectual filter, delivering the signal through the noise and inviting you to engage with some of the most rigorously examined ideas on the internet.
Language: English · Episodes: 100

LessWrong (Curated & Popular)
Podcast Episodes
“Little Echo” by Zvi

Duration: 4:08
I believe that we will win. An echo of an old ad for the 2014 US men's World Cup team. It did not win. I was in Berkeley for the 2025 Secular Solstice. We gather to sing and to reflect. The night's theme was the opposite…
“A Pragmatic Vision for Interpretability” by Neel Nanda

Duration: 1:03:58
Executive Summary The Google DeepMind mechanistic interpretability team has made a strategic pivot over the past year, from ambitious reverse-engineering to a focus on pragmatic interpretability: Trying to directly solve…
“AI in 2025: gestalt” by technicalities

Duration: 41:59
This is the editorial for this year's "Shallow Review of AI Safety". (It got long enough to stand alone.) Epistemic status: subjective impressions plus one new graph plus 300 links. Huge thanks to Jaeho Lee, Jaime Sevill…
“Eliezer’s Unteachable Methods of Sanity” by Eliezer Yudkowsky

Duration: 16:13
"How are you coping with the end of the world?" journalists sometimes ask me, and the true answer is something they have no hope of understanding and I have no hope of explaining in 30 seconds, so I usually answer someth…
“An Ambitious Vision for Interpretability” by leogao

Duration: 8:49
The goal of ambitious mechanistic interpretability (AMI) is to fully understand how neural networks work. While some have pivoted towards more pragmatic approaches, I think the reports of AMI's death have been greatly ex…
“MIRI’s 2025 Fundraiser” by alexvermeer

Duration: 15:37
MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to improve the conversation…