Avriel Epps: Teaching Kids About AI Bias

Author: Helen and Dave Edwards | July 13, 2025 | Duration: 50:51

In this conversation, we explore AI bias, transformative justice, and the future of technology with Dr. Avriel Epps, computational social scientist, Civic Science Postdoctoral Fellow at Cornell University's CATLab, and co-founder of AI for Abolition.

What makes this conversation unique is how it begins with Avriel's recently published children's book, A Kids Book About AI Bias (Penguin Random House), designed for ages 5-9. As an accomplished researcher with a PhD from Harvard and expertise in how algorithmic systems impact identity development, Avriel has taken on the remarkable challenge of translating complex technical concepts about AI bias into accessible language for the youngest learners.

Key themes we explore:

- The Translation Challenge: How to distill graduate-level research on algorithmic bias into concepts a six-year-old can understand—and why kids' unfiltered responses to AI bias reveal truths adults often struggle to articulate

- Critical Digital Literacy: Why building awareness of AI bias early can serve as a protective mechanism for young people who will be most vulnerable to these systems

- AI for Abolition: Avriel's nonprofit work building community power around AI, including developing open-source tools like "Repair" for transformative and restorative justice practitioners

- The Incentive Problem: Why the fundamental issue isn't the technology itself, but the economic structures driving AI development—and how communities might reclaim agency over systems built from their own data

- Generational Perspectives: How different generations approach digital activism, from Gen Z's innovative but potentially ephemeral protest methods to what Gen Alpha might bring to technological resistance

Throughout our conversation, Avriel demonstrates how critical analysis of technology can coexist with practical hope. Her work embodies the belief that while AI currently reinforces existing inequalities, it doesn't have to—if we can change who controls its development and deployment.

The conversation concludes with Avriel's ongoing research into how algorithmic systems shaped public discourse around major social and political events, and their vision for "small tech" solutions that serve communities rather than extracting from them.

For anyone interested in AI ethics, youth development, or the intersection of technology and social justice, this conversation offers both rigorous analysis and genuine optimism about what's possible when we center equity in technological development.

About Dr. Avriel Epps:

Dr. Avriel Epps (she/they) is a computational social scientist and a Civic Science Postdoctoral Fellow at the Cornell University CATLab. She completed her Ph.D. at Harvard University in Education with a concentration in Human Development. She also holds an S.M. in Data Science from Harvard’s School of Engineering and Applied Sciences and a B.A. in Communication Studies from UCLA.

Previously a Ford Foundation predoctoral fellow, Avriel is currently a Fellow at The National Center on Race and Digital Justice, a Roddenberry Fellow, and a Public Voices Fellow on Technology in the Public Interest with the Op-Ed Project in partnership with the MacArthur Foundation.

Avriel is also the co-founder of AI4Abolition, a community organization dedicated to increasing AI literacy in marginalized communities and building community power with and around data-driven technologies. Avriel has been invited to speak at various venues including tech giants like Google and TikTok, and for The U.S. Courts, focusing on algorithmic bias and fairness.

In the Fall of 2025, she will join Rutgers University as Assistant Professor of Fair and Responsible Data Science.

Links:

- Dr. Epps' official website: https://www.avrielepps.com

- AI for Abolition: https://www.ai4.org

- A Kids Book About AI Bias details: https://www.avrielepps.com/book


Hosted by Helen and Dave Edwards, Stay Human, from the Artificiality Institute is a conversation that lives in the messy, human space between our tools and our selves. Each episode digs into the subtle ways artificial intelligence is reshaping our daily decisions, our creative impulses, and even our sense of identity. This isn't a technical manual or a series of futuristic predictions; it's a grounded exploration of how we maintain our agency in a world increasingly mediated by algorithms.

The podcast operates from a core belief: that our engagement with AI should be about more than just safety or efficiency; it needs to be meaningful and worthwhile. You'll hear discussions rooted in story-based research, where complex ideas about cognition and ethics are unpacked through relatable narratives and real-world examples. The goal is to provide a framework for thoughtful choice, helping each of us consciously design the relationship we want with the machines in our lives.

Tuning in offers a chance to step back from the hype and consider how we can actively remain the authors of our own minds, preserving what makes us uniquely human even as the technology evolves. It's an essential listen for anyone curious about the personal and philosophical dimensions of our digital age.
