Hugo Larochelle: Deep Learning as Science

Author: Daniel Bashir | July 6, 2023 | Duration: 1:48:28

In episode 80 of The Gradient Podcast, Daniel Bashir speaks to Professor Hugo Larochelle.

Professor Larochelle leads the Montreal Google DeepMind team and is an adjunct professor at Université de Montréal and a Canada CIFAR AI Chair. His research focuses on the study and development of deep learning algorithms.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:38) Prof. Larochelle’s background, working in Bengio’s lab

* (04:53) Prof. Larochelle’s work and connectionism

* (08:20) 2004-2009, work with Bengio

* (08:40) Nonlocal Estimation of Manifold Structure, manifolds and deep learning

* (13:58) Manifold learning in vision and language

* (16:00) Relationship to Denoising Autoencoders and greedy layer-wise pretraining

* (21:00) From input copying to learning about local distribution structure

* (22:30) Zero-Data Learning of New Tasks

* (22:45) The phrase “extend machine learning towards AI” and terminology

* (26:55) Prescient hints of prompt engineering

* (29:10) Daniel goes on a totally unnecessary tangent

* (30:00) Methods for training deep networks (strategies and robust interdependent codes)

* (33:45) Motivations for layer-wise pretraining

* (35:15) Robust Interdependent Codes and interactions between neurons in a single network layer

* (39:00) 2009-2011, postdoc in Geoff Hinton’s lab

* (40:00) Reflections on the AlexNet moment

* (41:45) Frustration with methods for evaluating unsupervised methods, NADE

* (44:45) How researchers thought about representation learning, toying with objectives instead of architectures

* (47:40) The Restricted Boltzmann Forest

* (50:45) Imposing structure for tractable learning of distributions

* (53:11) 2011-2016 at U Sherbrooke (and Twitter)

* (53:45) How Prof. Larochelle approached research problems

* (56:00) How Domain Adversarial Networks came about

* (57:12) Can we still learn from Restricted Boltzmann Machines?

* (1:02:20) The ~ Infinite ~ Restricted Boltzmann Machine

* (1:06:55) The need for researchers doing different sorts of work

* (1:08:58) 2017-present, at MILA (and Google)

* (1:09:30) Modulating Early Visual Processing by Language, neuroscientific inspiration

* (1:13:22) Representation learning and generalization, what is a good representation (Meta-Dataset, Universal representation transformer layer, universal template, Head2Toe)

* (1:15:10) Meta-Dataset motivation

* (1:18:00) Shifting focus to the problem—good practices for “recycling deep learning”

* (1:19:15) Head2Toe intuitions

* (1:21:40) What are “universal representations” and manifold perspective on datasets, what is the right pretraining dataset

* (1:26:02) Prof. Larochelle’s takeaways from Fortuitous Forgetting in Connectionist Networks (led by Hattie Zhou)

* (1:32:15) Obligatory commentary on The Present Moment and current directions in ML

* (1:36:18) The creation and motivations of the TMLR journal

* (1:41:48) Prof. Larochelle’s takeaways about doing good science, building research groups, and nurturing a research environment

* (1:44:05) Prof. Larochelle’s advice for aspiring researchers today

* (1:47:41) Outro

Links:

* Professor Larochelle’s homepage and Twitter

* Transactions on Machine Learning Research

* Papers

* 2004-2009

* Nonlocal Estimation of Manifold Structure

* Classification using Discriminative Restricted Boltzmann Machines

* Zero-data learning of new tasks

* Exploring Strategies for Training Deep Neural Networks

* Deep Learning using Robust Interdependent Codes

* 2009-2011

* Stacked Denoising Autoencoders

* Tractable multivariate binary density estimation and the restricted Boltzmann forest

* The Neural Autoregressive Distribution Estimator

* Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks

* 2011-2016

* Practical Bayesian Optimization of Machine Learning Algorithms

* Learning Algorithms for the Classification Restricted Boltzmann Machine

* A neural autoregressive topic model

* Domain-Adversarial Training of Neural Networks

* NADE

* An Infinite Restricted Boltzmann Machine

* 2017-present

* Modulating early visual processing by language

* Meta-Dataset

* A Universal Representation Transformer Layer for Few-Shot Image Classification

* Learning a universal template for few-shot dataset generalization

* Impact of aliasing on generalization in deep convolutional networks

* Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning

* Fortuitous Forgetting in Connectionist Networks



Get full access to The Gradient at thegradientpub.substack.com/subscribe
