DeepSeek: What Happened, What Matters, and Why It’s Interesting

Author: Helen and Dave Edwards | January 28, 2025 | Duration: 25:58

First:

- Apologies for the audio! We had a production error…


What’s new:

- DeepSeek has made breakthroughs in both how AI systems are trained (making training far more affordable) and how they run in real-world use (making inference faster and more efficient)


Details

- FP8 Training: Working With Less Precise Numbers
  - Traditional AI training requires extremely precise numbers
  - DeepSeek found you can use less precise numbers (like rounding $10.857643 to $10.86)
  - This cuts memory and computation needs significantly with minimal impact
  - Like teaching someone math using rounded numbers instead of carrying every decimal place
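The idea can be sketched in a few lines. The helper below simulates low-precision storage by rounding each value to a fixed number of mantissa bits; it is an illustration of reduced-precision rounding only, not the actual FP8 format (which also limits exponent range) and not DeepSeek's training code:

```python
import numpy as np

def quantize(x, mantissa_bits):
    """Round each value to a limited number of mantissa bits.

    Crude stand-in for low-precision floats (illustrative only)."""
    m, e = np.frexp(x)                   # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)

weights = np.array([10.857643, -0.033217, 3.141592])
coarse = quantize(weights, 3)   # few bits: big rounding error, tiny storage
fine = quantize(weights, 8)     # more bits: much closer to the originals
```

With only 3 mantissa bits, 10.857643 is stored as 10.0; with 8 bits it comes back as 10.875. Training tolerates this surprisingly well because gradient noise already dwarfs the rounding error.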

- Learning from Other AIs (Distillation)
  - Traditional approach: AI learns everything from scratch by studying massive amounts of data
  - DeepSeek's approach: use existing AI models as teachers
  - Like having experienced programmers mentor new developers
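In its classic form (Hinton-style soft labels; DeepSeek's published recipe instead fine-tunes smaller models on text generated by R1), distillation trains the student to match the teacher's full output distribution rather than just the single right answer:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the student mimics how confident the teacher is about
    every option, which carries far more signal than a bare label."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return -np.sum(p_teacher * log_p_student)
```

The temperature softens both distributions so that the teacher's "second choices" contribute to the gradient too.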

- Trial & Error Learning (for their R1 model)
  - Started with some basic "tutoring" from advanced models
  - Then let it practice solving problems on its own
  - When it found good solutions, these were fed back into training
  - Led to "aha moments" where R1 discovered better ways to solve problems
  - Finally, polished its ability to explain its thinking clearly to humans
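The loop above can be caricatured in a few lines. Everything here is a toy: the `skill` number stands in for model weights, and a random draw stands in for solving a problem. The point is only the shape of the feedback loop, namely attempt, verify, and fold successes back into training:

```python
import random

def attempt(problem, skill):
    # Toy "model": higher skill means a better chance of a verified solution.
    return random.random() < skill

def self_improve(problems, skill=0.2, rounds=5, boost=0.1):
    """Trial-and-error loop: each round, try every problem, and every
    verified success nudges skill upward (standing in for a gradient
    update on the successful solutions)."""
    for _ in range(rounds):
        wins = sum(attempt(p, skill) for p in problems)
        skill = min(1.0, skill + boost * wins / len(problems))
    return skill
```

Because only verified successes feed back in, the model can bootstrap past its starting point without any new human-labeled data, which is the core of R1's pure-reinforcement-learning stage.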

- Smart Team Management (Mixture of Experts)
  - Instead of one massive system that does everything, DeepSeek built a team of specialists
  - Like running a software company with:
    - 256 specialists who focus on different areas
    - 1 generalist who helps with everything
    - A smart project manager (the router) who assigns work efficiently
  - For each task, only 8 specialists plus the generalist are activated
  - More efficient than having everyone work on everything
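A minimal sketch of top-k routing, with made-up sizes and a plain linear layer per expert. DeepSeek-V3's real experts are small feed-forward networks and its router is learned, but the sparsity pattern is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 256, 8            # hidden size, specialists, active per token

router = rng.normal(size=(n_experts, d))       # scores every specialist per token
experts = rng.normal(size=(n_experts, d, d))   # each specialist: a small layer
generalist = rng.normal(size=(d, d))           # the always-on shared expert

def moe_forward(x):
    scores = router @ x
    active = np.argsort(scores)[-k:]           # only the top-8 specialists run
    w = np.exp(scores[active] - scores[active].max())
    w = w / w.sum()                            # softmax gate over the active set
    out = generalist @ x                       # the generalist always contributes
    for wi, i in zip(w, active):
        out = out + wi * (experts[i] @ x)      # 8 matmuls instead of 256
    return out
```

Only 9 of the 257 expert layers touch any given token, so compute per token is a small fraction of what the full parameter count would suggest.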

- Efficient Memory Management (Multi-head Latent Attention)
  - Traditional AI is like keeping complete transcripts of every conversation
  - DeepSeek's approach is like taking smart meeting minutes
  - Captures key information in a compressed format
  - Similar to how JPEG compresses images
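The "meeting minutes" idea is low-rank compression of the attention cache: store a small latent vector per token, and re-expand it into keys and values only when attention needs them. A sketch with hypothetical sizes (real MLA also handles multiple heads and positional encoding):

```python
import numpy as np

rng = np.random.default_rng(1)
d, latent_dim = 64, 8                    # hidden size vs. cached size per token

W_down = rng.normal(size=(latent_dim, d)) / np.sqrt(d)           # compress
W_up_k = rng.normal(size=(d, latent_dim)) / np.sqrt(latent_dim)  # expand to keys
W_up_v = rng.normal(size=(d, latent_dim)) / np.sqrt(latent_dim)  # expand to values

def cache_token(hidden):
    # Store only an 8-number summary of the 64-number hidden state.
    return W_down @ hidden

def attend(query, cached_latents):
    # Rebuild keys and values from the compressed cache at attention time.
    K = np.array([W_up_k @ c for c in cached_latents])
    V = np.array([W_up_v @ c for c in cached_latents])
    scores = K @ query / np.sqrt(d)
    p = np.exp(scores - scores.max())
    p = p / p.sum()
    return p @ V
```

Here the cache is 8x smaller per token than storing full keys and values, which is what makes long conversations cheap to serve.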

- Looking Ahead (Multi-Token Prediction)
  - Traditional AI predicts one word at a time
  - DeepSeek looks ahead and predicts two words at once
  - Like a skilled reader who can read ahead while maintaining comprehension
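Mechanically, multi-token prediction adds an extra output head. A toy version with random weights and hypothetical sizes (in DeepSeek-V3 the second head is a small additional module trained alongside the main one):

```python
import numpy as np

rng = np.random.default_rng(2)
d, vocab = 32, 1000

head_next = rng.normal(size=(vocab, d))    # scores candidates for position t+1
head_after = rng.normal(size=(vocab, d))   # extra head: candidates for t+2

def predict_two(hidden_state):
    """One forward pass, two predictions: the next token and the one
    after it. The extra target also gives the model a denser training
    signal, and at inference it can feed speculative decoding."""
    return (int(np.argmax(head_next @ hidden_state)),
            int(np.argmax(head_after @ hidden_state)))
```

When the second guess checks out during generation, the model effectively emits two tokens for the price of one forward pass.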


Why This Matters

- Cost Revolution: A reported training cost of $5.6M (vs. hundreds of millions) suggests a future where AI development isn't limited to tech giants.

- Working Around Constraints: Shows how limitations can drive innovation. DeepSeek achieved state-of-the-art results without access to the most powerful chips (at least, that's the best conclusion at the moment).


What’s Interesting

- Efficiency vs Power: Challenges the assumption that advancing AI requires ever-increasing computing power; sometimes smarter engineering beats raw force.

- Self-Teaching AI: R1's ability to develop reasoning capabilities through pure reinforcement learning suggests AIs can discover problem-solving methods on their own.

- AI Teaching AI: The success of distillation shows how knowledge can be transferred between AI models, potentially leading to compounding improvements over time.

- IP for Free: If DeepSeek can be such a fast follower through distillation, what advantage is there for OpenAI, Google, or another company in releasing a novel model?



Stay Human, from the Artificiality Institute