David Wolpert: The Thermodynamics of Meaning

Author: Helen and Dave Edwards | April 6, 2025 | Duration: 1:16:19

In this episode, we welcome David Wolpert, a Professor at the Santa Fe Institute renowned for his groundbreaking work across multiple disciplines—from physics and computer science to game theory and complexity.

* Note: If you enjoy our podcast conversations, please join us for the Artificiality Summit on October 23-25 in Bend, Oregon for many more in-person conversations like these! Learn more about the Summit at www.artificiality.world/summit.

We reached out to David to explore the mathematics of meaning—a concept that's becoming crucial as we live more deeply with artificial intelligences. If machines can hold their own mathematical understanding of meaning, how does that reshape our interactions, our shared reality, and even what it means to be human?

David takes us on a journey through his paper "Semantic Information, Autonomous Agency and Non-Equilibrium Statistical Physics," co-authored with Artemy Kolchinsky. While mathematically rigorous in its foundation, our conversation explores these complex ideas in accessible terms.

At the core of our discussion is a novel framework for understanding meaning itself—not just as a philosophical concept, but as something that can be mathematically formalized. David explains how we can move beyond Claude Shannon's syntactic information theory (which focuses on the transmission of bits) to a deeper understanding of semantic information (what those bits actually mean to an agent).
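To make the syntactic side concrete: Shannon's mutual information I(X;Y) measures how many bits one variable carries about another, with no regard for what those bits mean to any agent. A minimal sketch (standard information theory, not code from the episode):

```python
import math

def mutual_information(joint):
    """Shannon mutual information I(X;Y) in bits, given a joint
    probability table joint[x][y]."""
    px = [sum(row) for row in joint]               # marginal P(X)
    py = [sum(col) for col in zip(*joint)]         # marginal P(Y)
    mi = 0.0
    for x, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# A noiseless 1-bit channel: X fully determines Y -> 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # 1.0
# Independent variables: the bits say nothing about Y -> 0 bits.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

The point of the Kolchinsky-Wolpert framework is that this quantity alone cannot distinguish information that matters to an agent from information that does not.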

Drawing from Judea Pearl's work on causality, Schrödinger's insights on life, and stochastic thermodynamics, David presents a unified framework where meaning emerges naturally from an agent's drive to persist into the future. This approach provides a mathematical basis for understanding what makes certain information meaningful to living systems—from humans to single cells.
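In rough outline, the formal ingredients look like this (notation heavily simplified; the full treatment is in the Kolchinsky-Wolpert paper):

```latex
% Viability: how sharply the system maintains its own state, measured
% as negative Shannon entropy at a future time \tau.
V(\tau) = -H(X_\tau)

% Syntactic information: the mutual information the system X shares
% with its environment Y at the initial time.
I_{\mathrm{syn}} = I(X_0 ; Y_0)

% Semantic information: the viability that would be lost if the
% system--environment correlations were counterfactually scrambled,
% with \hat{V} the viability under the scrambled initial distribution.
\Delta V = V(\tau) - \hat{V}(\tau)
```

Information is meaningful, on this account, exactly to the extent that destroying it would hurt the agent's ability to persist.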

Our conversation ventures into:

  • How AI might help us understand meaning in ways we cannot perceive ourselves
  • What a mathematically rigorous definition of meaning could mean for AI alignment
  • How contexts shape our understanding of what's meaningful
  • The distinction between causal information and mere correlation

We finish by talking about David's current work on a potentially concerning horizon: how distributed AI systems interacting through smart contracts could create scenarios beyond our mathematical ability to predict—a "distributed singularity" that might emerge in as little as five years. We wrote about this work here.

For anyone interested in artificial intelligence, complexity science, or the fundamental nature of meaning itself, this conversation offers rich insights from one of today's most innovative interdisciplinary thinkers.

About David Wolpert:
David Wolpert is a Professor at the Santa Fe Institute and one of the modern era's true polymaths. He received his PhD in physics from UC Santa Barbara but has made seminal contributions across numerous fields. His research spans machine learning (where he formulated the "No Free Lunch" theorems), statistical physics, game theory, distributed intelligence, and the foundations of inference and computation. Before joining SFI, he held positions at NASA and Stanford. His work consistently bridges disciplinary boundaries to address fundamental questions about complex systems, computation, and the nature of intelligence.

Thanks again to Jonathan Coulton for our music.


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute, is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms.

The podcast operates from a core belief: our engagement with AI should be about more than safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives.

Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.

Stay Human, from the Artificiality Institute
Podcast Episodes
Don Norman: Design for a Better World

Duration: 1:03:51
What role does design have in solving the world’s biggest problems? What can designers add? Some would say that designers played a role in getting us into our current mess. Can they also get us out of it? How can we desi…
Jamer Hunt: Not to Scale

Duration: 1:08:32
What are the cause and effect of my actions? How do I know the effect of the small acts in my life? How can I identify opportunities to have impact that is much larger than myself? How can we make problems that seem over…
David Krakauer: Complexity

Duration: 1:34:34
We’re always looking for new ideas from science that we can use in our work. Over the past few years, we have been researching new ways to handle increasing complexity in the world and how to solve complex problems. Why…
Generative AI: ChatGPT, DALL-E, Stable Diffusion, and the rest

Duration: 29:43
Everyone’s talking about it so we will too. Generative AI is taking the world by storm. But is it a good storm or a scary storm? How should individuals think about what’s possible? What about companies? Our take: generat…
Kees Dorst: Frame Innovation

Duration: 1:02:50
What can we learn from the practice of design? What might we learn if we had an insight into top designers’ minds? How might we apply the best practices of designers beyond the field of design itself? Most of our listene…
No-duhs and some surprises

Duration: 26:37
The latest Big Ideas report from MIT Sloan and BCG makes for an interesting read but contains flaws, obvious conclusions, and raises more questions than it answers. We discuss this report and make some suggestions about h…
Elon's error calculation at Twitter

Duration: 27:29
Twitter as we knew it is gone. Elon has fired half the full-time employees and 80 percent of the contractors. It's a brutal way to trim excess fat, reset the culture, and establish a loyal band. But is it a good decision? H…
Marina Nitze and Nick Sinai: Hack Your Bureaucracy

Duration: 56:50
We all likely want to improve the organizations we work in. We might want to improve the employee experience, improve the customer experience, or be more efficient and effective. But we all likely have had the experience…
Tom Davenport and Steve Miller: Working with AI

Duration: 52:38
How will AI change our jobs? Will it replace humans and eliminate jobs? Will it help humans get things done? Will it create new opportunities for new jobs? People often speculate on these topics, doing their best to pred…