Avriel Epps: Teaching Kids About AI Bias

Hosts: Helen and Dave Edwards | July 13, 2025 | Duration: 50:51

In this conversation, we explore AI bias, transformative justice, and the future of technology with Dr. Avriel Epps, computational social scientist, Civic Science Postdoctoral Fellow at Cornell University's CATLab, and co-founder of AI for Abolition.

What makes this conversation unique is how it begins with Avriel's recently published children's book, A Kids Book About AI Bias (Penguin Random House), designed for ages 5-9. As an accomplished researcher with a PhD from Harvard and expertise in how algorithmic systems impact identity development, Avriel has taken on the remarkable challenge of translating complex technical concepts about AI bias into accessible language for the youngest learners.

Key themes we explore:

- The Translation Challenge: How to distill graduate-level research on algorithmic bias into concepts a six-year-old can understand—and why kids' unfiltered responses to AI bias reveal truths adults often struggle to articulate

- Critical Digital Literacy: Why building awareness of AI bias early can serve as a protective mechanism for young people who will be most vulnerable to these systems

- AI for Abolition: Avriel's nonprofit work building community power around AI, including developing open-source tools like "Repair" for transformative and restorative justice practitioners

- The Incentive Problem: Why the fundamental issue isn't the technology itself, but the economic structures driving AI development—and how communities might reclaim agency over systems built from their own data

- Generational Perspectives: How different generations approach digital activism, from Gen Z's innovative but potentially ephemeral protest methods to what Gen Alpha might bring to technological resistance

Throughout our conversation, Avriel demonstrates how critical analysis of technology can coexist with practical hope. Her work embodies the belief that while AI currently reinforces existing inequalities, it doesn't have to—if we can change who controls its development and deployment.

The conversation concludes with Avriel's ongoing research into how algorithmic systems have shaped public discourse around major social and political events, and their vision for "small tech" solutions that serve communities rather than extracting from them.

For anyone interested in AI ethics, youth development, or the intersection of technology and social justice, this conversation offers both rigorous analysis and genuine optimism about what's possible when we center equity in technological development.

About Dr. Avriel Epps:

Dr. Avriel Epps (she/they) is a computational social scientist and a Civic Science Postdoctoral Fellow at the Cornell University CATLab. She completed her Ph.D. at Harvard University in Education with a concentration in Human Development. She also holds an S.M. in Data Science from Harvard’s School of Engineering and Applied Sciences and a B.A. in Communication Studies from UCLA.

Previously a Ford Foundation predoctoral fellow, Avriel is currently a Fellow at The National Center on Race and Digital Justice, a Roddenberry Fellow, and a Public Voices Fellow on Technology in the Public Interest with the Op-Ed Project in partnership with the MacArthur Foundation.

Avriel is also the co-founder of AI4Abolition, a community organization dedicated to increasing AI literacy in marginalized communities and building community power with and around data-driven technologies. Avriel has been invited to speak at various venues including tech giants like Google and TikTok, and for The U.S. Courts, focusing on algorithmic bias and fairness.

In Fall 2025, she will join Rutgers University as Assistant Professor of Fair and Responsible Data Science.

Links:

- Dr. Epps' official website: https://www.avrielepps.com

- AI for Abolition: https://www.ai4.org

- A Kids Book About AI Bias: https://www.avrielepps.com/book


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute, is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms.

The podcast operates from a core belief: that our engagement with AI should be about more than just safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives.

Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.
