Phil Brown — How IPUs are Advancing Machine Intelligence

Author: Lukas Biewald May 27, 2021 Duration: 57:10
Phil shares some of the approaches, like sparsity and low precision, behind the breakthrough performance of Graphcore's Intelligence Processing Units (IPUs).

---

Phil Brown leads the Applications team at Graphcore, where they're building high-performance machine learning applications for their Intelligence Processing Units (IPUs), new processors specifically designed for AI compute.

Connect with Phil:
LinkedIn: https://www.linkedin.com/in/philipsbrown/
Twitter: https://twitter.com/phil_s_brown

---

Timestamps:
0:00 Sneak peek, intro
1:44 From computational chemistry to Graphcore
5:16 The simulations behind weather prediction
10:54 Measuring improvement in weather prediction systems
15:35 How high performance computing and ML have different needs
19:00 The potential of sparse training
31:08 IPUs and computer architecture for machine learning
39:10 On performance improvements
44:43 The impacts of increasing computing capability
50:24 The ML chicken and egg problem
52:00 The challenges of converging at scale and bringing hardware to market

Links Discussed:
Rigging the Lottery: Making All Tickets Winners (Evci et al., 2019): https://arxiv.org/abs/1911.11134
Graphcore MK2 Benchmarks: https://www.graphcore.ai/mk2-benchmarks

Check out the transcription and discover more awesome ML projects: http://wandb.me/gd-phil-brown

---

Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack

Check out our Gallery, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/gallery

Lukas Biewald hosts Gradient Dissent: Conversations on AI, a series that moves beyond theoretical discussions to examine how artificial intelligence is actually built and deployed. Each episode features a direct, unscripted talk with a leading practitioner: you'll hear from engineers and researchers at places like NVIDIA, Meta, Google, Lyft, and OpenAI. The focus is on the tangible challenges and breakthroughs they encounter, from initial research to the complex reality of putting models into production. This isn't about abstract futures; it's a grounded look at the decisions shaping the field right now. Biewald, bringing his perspective from Weights & Biases, steers conversations toward the practical trade-offs and collaborative efforts that define modern AI work. For anyone in technology or business who wants to understand the mechanics behind the headlines, this podcast offers a rare, candid window into the process. You'll come away with a clearer sense of how ideas become functional systems and what it really takes to operate at the cutting edge.
Language: English
Episodes: 100

Gradient Dissent: Conversations on AI
Podcast Episodes
Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere

Duration: 51:31
On this episode, we’re joined by Aidan Gomez, Co-Founder and CEO at Cohere. Cohere develops and releases a range of innovative AI-powered tools and solutions for a variety of NLP use cases. We discuss: What “attention” m…
Neural Network Pruning and Training with Jonathan Frankle at MosaicML

Duration: 1:02:00
Jonathan Frankle, Chief Scientist at MosaicML and Assistant Professor of Computer Science at Harvard University, joins us on this episode. With comprehensive infrastructure and software tools, MosaicML aims to help busin…
Shreya Shankar — Operationalizing Machine Learning

Duration: 54:38
Shreya Shankar is a computer scientist, PhD student in databases at UC Berkeley, and co-author of "Operationalizing Machine Learning: An Interview Study", an ethnographic interview study with 18 machine…
Jeremy Howard — The Simple but Profound Insight Behind Diffusion

Duration: 1:12:57
Jeremy Howard is a co-founder of fast.ai, the non-profit research group behind the popular massive open online course "Practical Deep Learning for Coders", and the open source deep learning library "fastai". Jeremy is als…
Jerome Pesenti — Large Language Models, PyTorch, and Meta

Duration: 52:35
Jerome Pesenti is the former VP of AI at Meta, a tech conglomerate that includes Facebook, WhatsApp, and Instagram, and one of the most exciting places where AI research is happening today. Jerome shares his thoughts on T…