Ellie Pavlick: The AI Paradigm Shift

Author: Helen and Dave Edwards | February 5, 2026 | Duration: 55:49

In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, a Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Ellie's trajectory—from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science—reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.

Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.

Key themes we explore:

- The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges

- Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition

- The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI

- Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand"—always retreating as scientific progress advances, much like obsolete scientific concepts such as ether

- The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities

- Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential

Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.

Her work on ARIA demonstrates this philosophy in practice. Rather than avoiding mental health applications because they're ethically fraught, she's leaning into the difficulty precisely because it forces confrontation with all the hard questions—from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to see if we've "screwed up a whole generation."


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute, is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms.

The podcast operates from a core belief: that our engagement with AI should be about more than just safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives. Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.
