Steve Sloman: Information to Bits at the Artificiality Summit 2024

Author: Helen and Dave Edwards | August 3, 2025 | Duration: 34:59

At the Artificiality Summit in October 2024, Steve Sloman, professor at Brown University and author of The Knowledge Illusion and The Cost of Conviction, catalyzed a conversation about how we perceive knowledge in ourselves, others, and now in machines. What happens when our collective knowledge includes a community of machines? Steve challenged us to think about the dynamics of knowledge and understanding in an AI-driven world and about the evolving landscape of narratives, and to ask: can AI make us believe in the ways that humans make us believe? What would it take for AI to construct a compelling ideology and belief system that humans would want to follow?


Bio: Steven Sloman has taught at Brown since 1992. He studies higher-level cognition. He is a Fellow of the Cognitive Science Society, the Society of Experimental Psychologists, the American Psychological Society, the Eastern Psychological Association, and the Psychonomic Society. Along with scientific papers and editorials, his published work includes a 2005 book Causal Models: How We Think about the World and Its Alternatives, a 2017 book The Knowledge Illusion: Why We Never Think Alone co-authored with Phil Fernbach, and the forthcoming Righteousness: How Humans Decide from MIT Press. He has been Editor-in-Chief of the journal Cognition, Chair of the Brown University faculty, and created Brown’s concentration in Behavioral Decision Sciences.


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute, is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms. The podcast operates from a core belief: that our engagement with AI should be about more than just safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives. Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.
Language: en-us | Episodes: 100

Stay Human, from the Artificiality Institute
Podcast Episodes
Don Norman: Design for a Better World

Duration: 1:03:51
What role does design have in solving the world’s biggest problems? What can designers add? Some would say that designers played a role in getting us into our current mess. Can they also get us out of it? How can we desi…
Jamer Hunt: Not to Scale

Duration: 1:08:32
What are the cause and effect of my actions? How do I know the effect of the small acts in my life? How can I identify opportunities to have impact that is much larger than myself? How can we make problems that seem over…
David Krakauer: Complexity

Duration: 1:34:34
We’re always looking for new ideas from science that we can use in our work. Over the past few years, we have been researching new ways to handle increasing complexity in the world and how to solve complex problems. Why…
Generative AI: ChatGPT, DALL-E, Stable Diffusion, and the rest

Duration: 29:43
Everyone’s talking about it so we will too. Generative AI is taking the world by storm. But is it a good storm or a scary storm? How should individuals think about what’s possible? What about companies? Our take: generat…
Kees Dorst: Frame Innovation

Duration: 1:02:50
What can we learn from the practice of design? What might we learn if we had an insight into top designers’ minds? How might we apply the best practices of designers beyond the field of design itself? Most of our listene…
No-duhs and some surprises

Duration: 26:37
The latest Big Ideas report from MIT Sloan and BCG makes for an interesting read but contains flaws, offers obvious conclusions, and raises more questions than it answers. We discuss this report and make some suggestions about h…
Elon's error calculation at Twitter

Duration: 27:29
Twitter as we knew it is gone. Elon has fired half the full-time employees and 80 percent of the contractors. It’s a brutal way to trim excess fat, reset the culture, and establish a loyal band. But is it a good decision? H…
Marina Nitze and Nick Sinai: Hack Your Bureaucracy

Duration: 56:50
Most of us want to improve the organizations we work in. We might want to improve the employee experience, improve the customer experience, or be more efficient and effective. But we have all likely had the experience…
Tom Davenport and Steve Miller: Working with AI

Duration: 52:38
How will AI change our jobs? Will it replace humans and eliminate jobs? Will it help humans get things done? Will it create new opportunities for new jobs? People often speculate on these topics, doing their best to pred…