Luis Ceze — Accelerating Machine Learning Systems

Author: Lukas Biewald
Date: June 24, 2021
Duration: 48:28
From Apache TVM to OctoML, Luis gives direct insight into the world of ML hardware optimization and where systems optimization is heading.

---

Luis Ceze is co-founder and CEO of OctoML, co-author of the Apache TVM Project, and Professor of Computer Science and Engineering at the University of Washington. His research focuses on the intersection of computer architecture, programming languages, machine learning, and molecular biology.

Connect with Luis:
📍 Twitter: https://twitter.com/luisceze
📍 University of Washington profile: https://homes.cs.washington.edu/~luisceze/

---

⏳ Timestamps:
0:00 Intro and sneak peek
0:59 What is TVM?
8:57 Freedom of choice in software and hardware stacks
15:53 How new libraries can improve system performance
20:10 Trade-offs between efficiency and complexity
24:35 Specialized instructions
26:34 The future of hardware design and research
30:03 Where does architecture and research go from here?
30:56 The environmental impact of efficiency
32:49 Optimizing and trade-offs
37:54 What is OctoML and the Octomizer?
42:31 Automating systems design with and for ML
44:18 ML and molecular biology
46:09 The challenges of deployment and post-deployment

🌟 Transcript: http://wandb.me/gd-luis-ceze

🌟 Links:
1. OctoML: https://octoml.ai/
2. Apache TVM: https://tvm.apache.org/
3. "Scalable and Intelligent Learning Systems" (Chen, 2019): https://digital.lib.washington.edu/researchworks/handle/1773/44766
4. "Principled Optimization Of Dynamic Neural Networks" (Roesch, 2020): https://digital.lib.washington.edu/researchworks/handle/1773/46765
5. "Cross-Stack Co-Design for Efficient and Adaptable Hardware Acceleration" (Moreau, 2018): https://digital.lib.washington.edu/researchworks/handle/1773/43349
6. "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning" (Chen et al., 2018): https://www.usenix.org/system/files/osdi18-chen.pdf
7. Porcupine, a molecular tagging system introduced in "Rapid and robust assembly and decoding of molecular tags with DNA-based nanopore signatures" (Doroschak et al., 2020): https://www.nature.com/articles/s41467-020-19151-8

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects, and meet other people working in Deep Learning: http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected

Lukas Biewald hosts Gradient Dissent: Conversations on AI, a series that moves beyond theoretical discussions to examine how artificial intelligence is actually built and deployed. Each episode features a direct, unscripted talk with a leading practitioner: you'll hear from engineers and researchers at places like NVIDIA, Meta, Google, Lyft, and OpenAI. The focus is on the tangible challenges and breakthroughs they encounter, from initial research to the complex reality of putting models into production. This isn't about abstract futures; it's a grounded look at the decisions shaping the field right now. Biewald, bringing his perspective from Weights & Biases, steers conversations toward the practical trade-offs and collaborative efforts that define modern AI work. For anyone in technology or business who wants to understand the mechanics behind the headlines, this podcast offers a rare, candid window into the process. You'll come away with a clearer sense of how ideas become functional systems and what it really takes to operate at the cutting edge.
Language: English
Episodes: 100

Gradient Dissent: Conversations on AI
Podcast Episodes
Evaluating LLMs with Chatbot Arena and Joseph E. Gonzalez

Duration: 55:32
In this episode of Gradient Dissent, Joseph E. Gonzalez, EECS Professor at UC Berkeley and Co-Founder at RunLLM, joins host Lukas Biewald to explore innovative approaches to evaluating LLMs. They discuss the concept of vi…
Snowflake’s CEO Sridhar Ramaswamy on 700+ LLM enterprise use cases

Duration: 55:42
In this episode of Gradient Dissent, Snowflake CEO Sridhar Ramaswamy joins host Lukas Biewald to explore how AI is transforming enterprise data strategies. They discuss Sridhar's journey from Google to Snowflake, diving i…
Elevating ML Infrastructure with Modal Labs CEO Erik Bernhardsson

Duration: 49:39
In this episode of Gradient Dissent, Erik Bernhardsson, CEO & Founder of Modal Labs, joins host Lukas Biewald to discuss the future of machine learning infrastructure. They explore how Modal is enhancing the developer ex…
From No-Code to AI-Powered Apps with Airtable’s Howie Liu

Duration: 1:12:57
In this episode of Gradient Dissent, Howie Liu, CEO of Airtable, joins host Lukas Biewald to dive into Airtable's transformation from a no-code app builder to a platform capable of supporting complex AI-driven workflows.…
Reinventing AI Agents with Imbue CEO Kanjun Qiu

Duration: 48:37
In this episode of Gradient Dissent, Kanjun Qiu, CEO and Co-founder of Imbue, joins host Lukas Biewald to discuss how AI agents are transforming code generation and software development. Discover the potential impact and…
From startup to $1.2B with Lambda’s Stephen Balaban

Duration: 49:56
In this episode of Gradient Dissent, Stephen Balaban, CEO of Lambda Labs, joins host Lukas Biewald to discuss the journey of scaling Lambda Labs to an impressive $400M in revenue. They explore the pivotal moments that sh…