The Evolution of Reasoning in Small Language Models with Yejin Choi - #761

Author: Sam Charrington | January 30, 2026 | Duration: 1:06:21
Today, we're joined by Yejin Choi, professor in the Computer Science Department at Stanford University and senior fellow at the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin’s recent work on making small language models reason more effectively. We discuss the central role that high-quality, diverse data plays in closing the intelligence gap between small and large models, and how combining synthetic data generation, imitation learning, and reinforcement learning can unlock stronger reasoning capabilities in smaller models. Yejin explains the risks of homogeneity in model outputs and mode collapse highlighted in her “Artificial Hivemind” paper, and their impact on human creativity and knowledge. We also discuss her team's novel approaches, including reinforcement learning as a pre-training objective, where models are incentivized to “think” before predicting the next token, and "Prismatic Synthesis," a gradient-based method for generating diverse synthetic math data while filtering out overrepresented examples. Additionally, we cover the societal implications of AI and the concept of pluralistic alignment—ensuring AI reflects the diverse norms and values of humanity. Finally, Yejin shares her mission to democratize AI beyond large organizations and offers her predictions for the coming year. The complete show notes for this episode can be found at https://twimlai.com/go/761.

Hosted by industry analyst and commentator Sam Charrington, The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) serves as a vital conduit between cutting-edge research and its real-world implications. This isn't a series of technical lectures; it's a set of conversations that unpack how AI and machine learning are actively reshaping industries and societal structures. Each episode connects you directly with leading researchers, engineers, and innovative thinkers who are defining the frontiers of the field. The discussions go beyond abstract theory to explore the practical challenges, ethical considerations, and business transformations driven by these technologies. Whether you're a data scientist deep in the code, a tech-savvy leader strategizing implementation, or simply fascinated by the future of intelligent systems, this podcast provides the context and depth needed to stay informed. By focusing on the people behind the algorithms and the ideas powering the platforms, Sam creates a resource that is both intellectually substantive and genuinely engaging, building a thoughtful community around one of the most significant technological shifts of our time.
Language: English | Episodes: 100

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Podcast Episodes
Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo - #671

Duration: 1:05:40
Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2024 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper…
Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668

Duration: 39:45
Today we’re joined by Ben Zhao, a Neubauer professor of computer science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s rec…
Learning Transformer Programs with Dan Friedman - #667

Duration: 38:48
Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his pap…
