Ellie Pavlick: The AI Paradigm Shift

Author: Helen and Dave Edwards · February 5, 2026 · Duration: 55:49

In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, a Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Ellie's trajectory—from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science—reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.

Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.

Key themes we explore:

- The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges

- Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition

- The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI

- Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand"—always retreating as scientific progress advances, much like obsolete scientific concepts such as ether

- The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities

- Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential

Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.

Her work on ARIA demonstrates this philosophy in practice. Rather than avoiding mental health applications because they're ethically fraught, she's leaning into the difficulty precisely because it forces confrontation with all the hard questions—from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to see if we've "screwed up a whole generation."


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute, is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms.

The podcast operates from a core belief: that our engagement with AI should be about more than just safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives.

Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.
Language: en-US · Episodes: 100

Stay Human, from the Artificiality Institute
Podcast Episodes
Don Norman: Design for a Better World

Duration: 1:03:51
What role does design have in solving the world’s biggest problems? What can designers add? Some would say that designers played a role in getting us into our current mess. Can they also get us out of it? How can we desi…
Jamer Hunt: Not to Scale

Duration: 1:08:32
What are the causes and effects of my actions? How do I know the effect of the small acts in my life? How can I identify opportunities to have impact that is much larger than myself? How can we make problems that seem over…
David Krakauer: Complexity

Duration: 1:34:34
We’re always looking for new ideas from science that we can use in our work. Over the past few years, we have been researching new ways to handle increasing complexity in the world and how to solve complex problems. Why…
Generative AI: ChatGPT, DALL-E, Stable Diffusion, and the rest

Duration: 29:43
Everyone’s talking about it so we will too. Generative AI is taking the world by storm. But is it a good storm or a scary storm? How should individuals think about what’s possible? What about companies? Our take: generat…
Kees Dorst: Frame Innovation

Duration: 1:02:50
What can we learn from the practice of design? What might we learn if we had an insight into top designers’ minds? How might we apply the best practices of designers beyond the field of design itself? Most of our listene…
No-duhs and some surprises

Duration: 26:37
The latest Big Ideas report from MIT Sloan and BCG makes for an interesting read but contains flaws, obvious conclusions, and raises more questions than it answers. We discuss this report and make some suggestions about h…
Elon's error calculation at Twitter

Duration: 27:29
Twitter as we knew it is gone. Elon has fired half the full-time employees and 80 percent of the contractors. It's a brutal way to trim excess fat, reset the culture, and establish a loyal band. But is it a good decision? H…
Marina Nitze and Nick Sinai: Hack Your Bureaucracy

Duration: 56:50
We all likely want to improve the organizations we work in. We might want to improve the employee experience, improve the customer experience, or be more efficient and effective. But we all likely have had the experience…
Tom Davenport and Steve Miller: Working with AI

Duration: 52:38
How will AI change our jobs? Will it replace humans and eliminate jobs? Will it help humans get things done? Will it create new opportunities for new jobs? People often speculate on these topics, doing their best to pred…