Hugo Larochelle: Deep Learning as Science

Author: Daniel Bashir | July 6, 2023 | Duration: 1:48:28

In episode 80 of The Gradient Podcast, Daniel Bashir speaks to Professor Hugo Larochelle.

Professor Larochelle leads the Google DeepMind team in Montreal, is an adjunct professor at Université de Montréal, and holds a Canada CIFAR AI Chair. His research focuses on the study and development of deep learning algorithms.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:38) Prof. Larochelle’s background, working in Bengio’s lab

* (04:53) Prof. Larochelle’s work and connectionism

* (08:20) 2004-2009, work with Bengio

* (08:40) Nonlocal Estimation of Manifold Structure, manifolds and deep learning

* (13:58) Manifold learning in vision and language

* (16:00) Relationship to Denoising Autoencoders and greedy layer-wise pretraining

* (21:00) From input copying to learning about local distribution structure

* (22:30) Zero-Data Learning of New Tasks

* (22:45) The phrase “extend machine learning towards AI” and terminology

* (26:55) Prescient hints of prompt engineering

* (29:10) Daniel goes on a totally unnecessary tangent

* (30:00) Methods for training deep networks (strategies and robust interdependent codes)

* (33:45) Motivations for layer-wise pretraining

* (35:15) Robust Interdependent Codes and interactions between neurons in a single network layer

* (39:00) 2009-2011, postdoc in Geoff Hinton’s lab

* (40:00) Reflections on the AlexNet moment

* (41:45) Frustration with methods for evaluating unsupervised methods, NADE

* (44:45) How researchers thought about representation learning, toying with objectives instead of architectures

* (47:40) The Restricted Boltzmann Forest

* (50:45) Imposing structure for tractable learning of distributions

* (53:11) 2011-2016 at U Sherbrooke (and Twitter)

* (53:45) How Prof. Larochelle approached research problems

* (56:00) How Domain Adversarial Networks came about

* (57:12) Can we still learn from Restricted Boltzmann Machines?

* (1:02:20) The ~ Infinite ~ Restricted Boltzmann Machine

* (1:06:55) The need for researchers doing different sorts of work

* (1:08:58) 2017-present, at MILA (and Google)

* (1:09:30) Modulating Early Visual Processing by Language, neuroscientific inspiration

* (1:13:22) Representation learning and generalization, what is a good representation (Meta-Dataset, Universal representation transformer layer, universal template, Head2Toe)

* (1:15:10) Meta-Dataset motivation

* (1:18:00) Shifting focus to the problem—good practices for “recycling deep learning”

* (1:19:15) Head2Toe intuitions

* (1:21:40) What are “universal representations” and manifold perspective on datasets, what is the right pretraining dataset

* (1:26:02) Prof. Larochelle’s takeaways from Fortuitous Forgetting in Connectionist Networks (led by Hattie Zhou)

* (1:32:15) Obligatory commentary on The Present Moment and current directions in ML

* (1:36:18) The creation and motivations of the TMLR journal

* (1:41:48) Prof. Larochelle’s takeaways about doing good science, building research groups, and nurturing a research environment

* (1:44:05) Prof. Larochelle’s advice for aspiring researchers today

* (1:47:41) Outro

Links:

* Professor Larochelle’s homepage and Twitter

* Transactions on Machine Learning Research

* Papers

* 2004-2009

* Nonlocal Estimation of Manifold Structure

* Classification using Discriminative Restricted Boltzmann Machines

* Zero-data learning of new tasks

* Exploring Strategies for Training Deep Neural Networks

* Deep Learning using Robust Interdependent Codes

* 2009-2011

* Stacked Denoising Autoencoders

* Tractable multivariate binary density estimation and the restricted Boltzmann forest

* The Neural Autoregressive Distribution Estimator

* Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks

* 2011-2016

* Practical Bayesian Optimization of Machine Learning Algorithms

* Learning Algorithms for the Classification Restricted Boltzmann Machine

* A neural autoregressive topic model

* Domain-Adversarial Training of Neural Networks

* NADE

* An Infinite Restricted Boltzmann Machine

* 2017-present

* Modulating early visual processing by language

* Meta-Dataset

* A Universal Representation Transformer Layer for Few-Shot Image Classification

* Learning a universal template for few-shot dataset generalization

* Impact of aliasing on generalization in deep convolutional networks

* Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning

* Fortuitous Forgetting in Connectionist Networks



Get full access to The Gradient at thegradientpub.substack.com/subscribe

Hosted by Daniel Bashir, The Gradient: Perspectives on AI moves beyond surface-level headlines to explore the intricate machinery and human ideas shaping artificial intelligence. Each episode is built on a foundation of deep research, leading to conversations that are both technically substantive and broadly accessible. You'll hear from researchers, engineers, and philosophers who are actively building and critiquing our technological future, discussing not just how AI systems work, but the larger implications of their integration into society. This isn't about speculative hype; it's a grounded examination of real progress, persistent challenges, and ethical considerations from those on the front lines. The discussions peel back layers on topics like model architecture, policy, and the fundamental science behind the algorithms becoming part of our daily lives. For anyone curious about the substance behind the buzz, whether you have a technical background or are simply keen to understand a defining technology of our age, this podcast offers a crucial and thoughtful resource. Tune in for a consistently detailed and nuanced take that treats artificial intelligence with the complexity it deserves.