Hugo Larochelle: Deep Learning as Science

Author: Daniel Bashir | July 6, 2023 | Duration: 1:48:28

In episode 80 of The Gradient Podcast, Daniel Bashir speaks to Professor Hugo Larochelle.

Professor Larochelle leads the Montreal Google DeepMind team, is an adjunct professor at Université de Montréal, and holds a Canada CIFAR AI Chair. His research focuses on the study and development of deep learning algorithms.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:38) Prof. Larochelle’s background, working in Bengio’s lab

* (04:53) Prof. Larochelle’s work and connectionism

* (08:20) 2004-2009, work with Bengio

* (08:40) Nonlocal Estimation of Manifold Structure, manifolds and deep learning

* (13:58) Manifold learning in vision and language

* (16:00) Relationship to Denoising Autoencoders and greedy layer-wise pretraining

* (21:00) From input copying to learning about local distribution structure

* (22:30) Zero-Data Learning of New Tasks

* (22:45) The phrase “extend machine learning towards AI” and terminology

* (26:55) Prescient hints of prompt engineering

* (29:10) Daniel goes on a totally unnecessary tangent

* (30:00) Methods for training deep networks (strategies and robust interdependent codes)

* (33:45) Motivations for layer-wise pretraining

* (35:15) Robust Interdependent Codes and interactions between neurons in a single network layer

* (39:00) 2009-2011, postdoc in Geoff Hinton’s lab

* (40:00) Reflections on the AlexNet moment

* (41:45) Frustration with methods for evaluating unsupervised methods, NADE

* (44:45) How researchers thought about representation learning, toying with objectives instead of architectures

* (47:40) The Restricted Boltzmann Forest

* (50:45) Imposing structure for tractable learning of distributions

* (53:11) 2011-2016 at U Sherbrooke (and Twitter)

* (53:45) How Prof. Larochelle approached research problems

* (56:00) How Domain Adversarial Networks came about

* (57:12) Can we still learn from Restricted Boltzmann Machines?

* (1:02:20) The ~ Infinite ~ Restricted Boltzmann Machine

* (1:06:55) The need for researchers doing different sorts of work

* (1:08:58) 2017-present, at MILA (and Google)

* (1:09:30) Modulating Early Visual Processing by Language, neuroscientific inspiration

* (1:13:22) Representation learning and generalization, what is a good representation (Meta-Dataset, Universal representation transformer layer, universal template, Head2Toe)

* (1:15:10) Meta-Dataset motivation

* (1:18:00) Shifting focus to the problem—good practices for “recycling deep learning”

* (1:19:15) Head2Toe intuitions

* (1:21:40) What are “universal representations” and manifold perspective on datasets, what is the right pretraining dataset

* (1:26:02) Prof. Larochelle’s takeaways from Fortuitous Forgetting in Connectionist Networks (led by Hattie Zhou)

* (1:32:15) Obligatory commentary on The Present Moment and current directions in ML

* (1:36:18) The creation and motivations of the TMLR journal

* (1:41:48) Prof. Larochelle’s takeaways about doing good science, building research groups, and nurturing a research environment

* (1:44:05) Prof. Larochelle’s advice for aspiring researchers today

* (1:47:41) Outro

Links:

* Professor Larochelle’s homepage and Twitter

* Transactions on Machine Learning Research

* Papers
  * 2004-2009
    * Nonlocal Estimation of Manifold Structure
    * Classification using Discriminative Restricted Boltzmann Machines
    * Zero-data learning of new tasks
    * Exploring Strategies for Training Deep Neural Networks
    * Deep Learning using Robust Interdependent Codes
  * 2009-2011
    * Stacked Denoising Autoencoders
    * Tractable multivariate binary density estimation and the restricted Boltzmann forest
    * The Neural Autoregressive Distribution Estimator
    * Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks
  * 2011-2016
    * Practical Bayesian Optimization of Machine Learning Algorithms
    * Learning Algorithms for the Classification Restricted Boltzmann Machine
    * A neural autoregressive topic model
    * Domain-Adversarial Training of Neural Networks
    * NADE
    * An Infinite Restricted Boltzmann Machine
  * 2017-present
    * Modulating early visual processing by language
    * Meta-Dataset
    * A Universal Representation Transformer Layer for Few-Shot Image Classification
    * Learning a universal template for few-shot dataset generalization
    * Impact of aliasing on generalization in deep convolutional networks
    * Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
    * Fortuitous Forgetting in Connectionist Networks



Get full access to The Gradient at thegradientpub.substack.com/subscribe
