Ellie Pavlick: The AI Paradigm Shift

Hosts: Helen and Dave Edwards · February 5, 2026 · Duration: 55:49

In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Ellie's trajectory—from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science—reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.

Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.

Key themes we explore:

- The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges

- Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition

- The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI

- Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand"—always retreating as scientific progress advances, much like obsolete scientific concepts such as ether

- The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities

- Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential

Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.

Her work on ARIA demonstrates this philosophy in practice. Rather than avoiding mental health applications because they're ethically fraught, she's leaning into the difficulty precisely because it forces confrontation with all the hard questions—from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to see if we've "screwed up a whole generation."


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute, is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms. The podcast operates from a core belief: that our engagement with AI should be about more than just safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives. Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.
Language: en-US · Episodes: 100

Stay Human, from the Artificiality Institute
Podcast Episodes
Values & Generative AI

Duration: 25:05
As Silicon Valley lunges towards creating AI that is considered superior to humans (at times called Artificial General Intelligence or Super-intelligent AI), it does so with the premise that it is possible to encode valu…
Culture & Generative AI

Duration: 32:29
Culture plays a vital role in connecting individuals and communities, enabling us to leverage our unique talents, share knowledge, and solve problems together. However, the rise of an intelligentsia of machine soothsayer…
Mind for our Minds: Introduction

Duration: 26:28
This episode is the first in our summer series based on our thesis for designing AI to be a Mind for our Minds. We recently presented this idea for the first time at our favorite event of the year hosted by The House of…
C. Thi Nguyen: Metrification

Duration: 1:07:38
AI is based on data. And data is frequently collected with the intent to be quantified, understood, and used across contexts. That’s why we have things like grade point averages that translate across subject matters and e…
Harpreet Sareen: Cyborg Botany

Duration: 48:44
We are deeply interested in the intersection of the digital and material worlds, both living and not living. Most of our interviews are focused on the intersection of humans and machines—how does the digital world affect…
Arvind Jain: Glean, Enterprise Search, and Generative AI

Duration: 48:03
Anyone working in a large organization has likely asked this question: Why is it that I can seemingly find anything on the internet but I can’t seem to find anything inside my organization? It is counter-intuitive that i…
Lukas Egger: Generative AI, a view from SAP

Duration: 46:35
The world has been upended by the introduction of generative AI. We think this could be the largest advance in technology—ever. All of our clients are trying to figure out what to do, how to de-risk the introduction of t…
Katie Davis: Technology's Child

Duration: 52:32
Is technology good or bad for children? How should parents think about technology in their children’s lives? Are there different answers depending on the age of the child and their stage of development? What can we apply…
Andrew Blum: The Weather Machine

Duration: 52:58
Weather forecasting is fascinating. It involves making predictions in the complex, natural world, using a global infrastructure for people who have varying needs and desires. Some just want to know if we should carry an…
Juan Noguera: Generative AI in Industrial Design

Duration: 39:26
We’ve heard a lot about how generative AI may negatively impact careers in design. But we wonder how might generative AI have a positive impact on designers? How might generative AI be used as a tool that helps designers…