Beth Rudden: AI, Trust, and Bast AI

Author: Helen and Dave Edwards · August 17, 2025 · Duration: 36:34

Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we, and us. Learn more here: www.artificialityinstitute.org/summit

In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems.

Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.

Key themes we explore:

  • Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context
  • Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning
  • Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication
  • The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it
  • Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it
  • Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation


Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems.

The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.

About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she has been recognized as one of the 100 most brilliant leaders in AI Ethics. With a background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing humans.

This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute, is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms.

The podcast operates from a core belief: our engagement with AI should be about more than just safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives.

Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.

Stay Human, from the Artificiality Institute
Podcast Episodes
Megan Brown: Data Literacy

Duration: 59:38
All major companies are working to increase the value of data science. Setting a goal may be easy but implementation often raises challenging questions. How should companies think about the role of data scientists, the c…
Peter Sterling: Decision Evolution

Duration: 1:13:41
This week we talk with Peter Sterling, the author of What Is Health? Peter has had a long career in medicine and neuroscience. He has recently published in JAMA Psychiatry, with Michael Platt, on Why Deaths of Despair Ar…
Stephen Fleming: Metacognition

Duration: 1:01:40
It’s human to know oneself. We are able to self-monitor, understand our cognition, and recognize gaps in our knowledge. This is called metacognition—we think about how we think. We can think of it as self-awareness or th…
Jevin West: Making Sense of Data

Duration: 51:57
Have you ever wondered what it means to be data literate in a world of big data and AI? Now that so many decisions rely on information that is only readable by machine and our statistical intuitions, which were bad befor…
Michael Bungay Stanier: Staying Curious

Duration: 43:25
Have you wondered what makes people different from machines? One thing is curiosity—curiosity is something that drives humans but, as yet, not machines. And one person who knows humans and curiosity is Michael Bungay…
Mollie Pettit: Visualizing Data

Duration: 41:01
Making decisions with data requires some form of communication with data. But how do we communicate with numbers, characters, and binary bits? The best way today is through data visualization. Visualizing data has come…
Josh Lovejoy: Designing AI

Duration: 1:27:39
Have you ever wondered what it takes to design AI that doesn't do more harm than good? We speak with Josh Lovejoy, who is perhaps the most experienced practitioner in the field of human-centered AI design. At the time…
Kate O'Neill: Humanizing Tech

Duration: 43:52
Have you ever wondered what it means to be a humanist in the age of technology? How can we put human values into a machine? How can we even know what those human values are? We asked Kate O’Neill, founder of KO Insights…
Tania Lombrozo: Intuition and Data

Duration: 49:20
Have you ever wondered why we humans love to use our intuition even when we are surrounded by data, and even though we know that simple algorithms can be more accurate than human judgment? We put that exact question to Tani…
