Thomas Dietterich: From the Foundations

Author: Daniel Bashir | November 30, 2023 | Duration: 2:01:57

In episode 100 of The Gradient Podcast, Daniel Bashir speaks to Professor Thomas Dietterich.

Professor Dietterich is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. He is a pioneer in the field of machine learning and has authored more than 225 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He is a former President of the Association for the Advancement of Artificial Intelligence, and the founding President of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal for Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently serves as one of the moderators for the cs.LG category on arXiv.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.

Outline:

* (00:00) Episode 100 Note

* (02:03) Intro

* (04:23) Prof. Dietterich’s background

* (14:20) Kuhn and theory development in AI, how Prof. Dietterich thinks about the philosophy of science and AI

* (20:10) Scales of understanding and sentience, grounding, observable evidence

* (23:58) Limits of statistical learning without causal reasoning, systematic understanding

* (25:48) A challenge for the ML community: testing for systematicity

* (26:13) Forming causal understandings of the world

* (28:18) Learning at the Knowledge Level

* (29:18) Background and definitions

* (32:18) Knowledge and goals, a note on LLMs

* (33:03) What it means to learn

* (41:05) LLMs as learning results of inference without learning first principles

* (43:25) System 1/System 2 thinking in humans and LLMs

* (47:23) “Routine Science”

* (47:38) Solving multiclass learning problems via error-correcting output codes

* (52:53) Error-correcting codes and redundancy

* (54:48) Why error-correcting codes work, contra intuition

* (59:18) Bias in ML

* (1:06:23) MAXQ for hierarchical RL

* (1:15:48) Computational sustainability

* (1:19:53) Project TAHMO’s moonshot

* (1:23:28) Anomaly detection for weather stations

* (1:25:33) Robustness

* (1:27:23) Motivating The Familiarity Hypothesis

* (1:27:23) Anomaly detection and self-models of competence

* (1:29:25) Measuring the health of freshwater streams

* (1:31:55) An open set problem in species detection

* (1:33:40) Issues in anomaly detection for deep learning

* (1:37:45) The Familiarity Hypothesis

* (1:40:15) Mathematical intuitions and the Familiarity Hypothesis

* (1:44:12) What’s Wrong with LLMs and What We Should Be Building Instead

* (1:46:20) Flaws in LLMs

* (1:47:25) The systems Prof. Dietterich wants to develop

* (1:49:25) Hallucination/confabulation and LLMs vs knowledge bases

* (1:54:00) World knowledge and linguistic knowledge

* (1:55:07) End-to-end learning and knowledge bases

* (1:57:42) Components of an intelligent system and separability

* (1:59:06) Thinking through external memory

* (2:01:10) Outro

Links:

* Research — Fundamentals (Philosophy of AI)

* Learning at the Knowledge Level

* What Does it Mean for a Machine to Understand?

* Research — “Routine Science”

* Ensemble methods in ML and error-correcting output codes

* Solving multiclass learning problems via error-correcting output codes (a toy sketch of the idea appears below, after this list)

* An experimental comparison of bagging, boosting, and randomization

* ML Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms

* The definitive treatment of these questions, by Gareth James

* Discovering/Exploiting structure in MDPs:

* MAXQ for hierarchical RL

* Exogenous State MDPs (paper with George Trimponias, slides)

* Research — Ecosystem Informatics and Computational Sustainability

* Project TAHMO

* Challenges for ML in Computational Sustainability

* Research — Robustness

* Steps towards robust AI (AAAI President’s Address)

* Benchmarking Neural Network Robustness to Common Corruptions and Perturbations (with Dan Hendrycks)

* The familiarity hypothesis: Explaining the behavior of deep open set methods

* Recent commentary

* Toward High-Reliability AI

* What's Wrong with Large Language Models and What We Should Be Building Instead
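
For readers skimming the links above: the idea behind “Solving multiclass learning problems via error-correcting output codes” (ECOC) is compact enough to sketch. Each class gets a binary codeword, one binary classifier is trained per bit, and prediction decodes to the class with the nearest codeword, so a few misfiring classifiers can be outvoted. The sketch below is a minimal illustration under our own assumptions (a hand-built exhaustive code matrix for four classes, scikit-learn logistic regressions as the bit learners), not the paper’s implementation.

```python
# Minimal ECOC sketch (illustrative only, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per class, one column per binary subproblem. This is the
# exhaustive code for 4 classes: every nontrivial two-way split of the
# classes appears exactly once. Any two rows differ in 4 bits, so
# decoding can correct any single misclassified bit.
CODE = np.array([
    [1, 0, 0, 0, 1, 1, 1],  # class 0
    [0, 1, 0, 0, 1, 0, 0],  # class 1
    [0, 0, 1, 0, 0, 1, 0],  # class 2
    [0, 0, 0, 1, 0, 0, 1],  # class 3
])

def fit_ecoc(X, y):
    """Train one binary classifier per column of the code matrix."""
    return [LogisticRegression(max_iter=1000).fit(X, CODE[y, j])
            for j in range(CODE.shape[1])]

def predict_ecoc(models, X):
    """Predict each bit, then decode to the class whose codeword is
    nearest in Hamming distance -- this is where errors get corrected."""
    bits = np.column_stack([m.predict(X) for m in models])      # (n, 7)
    dists = (bits[:, None, :] != CODE[None, :, :]).sum(axis=2)  # (n, 4)
    return dists.argmin(axis=1)
```

scikit-learn ships the same idea as `sklearn.multiclass.OutputCodeClassifier`, which generates a random code matrix whose width is controlled by its `code_size` parameter; the paper argues for code matrices chosen to maximize row and column separation instead.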



Get full access to The Gradient at thegradientpub.substack.com/subscribe

