David Pfau: Manifold Factorization and AI for Science

Author: Daniel Bashir | July 11, 2024 | Duration: 2:00:52

Episode 130

I spoke with David Pfau about:

* Spectral learning and ML

* Learning to disentangle manifolds and (projective) representation theory

* Deep learning for computational quantum mechanics

* Picking and pursuing research problems and directions

David’s work is really (times k for some very large value of k) interesting—I’ve been inspired to descend a number of rabbit holes because of it.

(if you listen to this episode, you might become as cool as this guy)

While I’m at it — I’m still hovering around 40 ratings on Apple Podcasts. It’d mean a lot if you’d consider helping me bump that up!

Enjoy—and let me know what you think!

David is a staff research scientist at Google DeepMind. He is also a visiting professor at Imperial College London in the Department of Physics, where he supervises work on applications of deep learning to computational quantum mechanics. His research interests span artificial intelligence, machine learning and scientific computing.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:52) David Pfau the “critic”

* (02:05) Scientific applications of deep learning — David’s interests

* (04:57) Brain / neural network analogies

* (09:40) Modern ML systems and theories of the brain

* (14:19) Desirable properties of theories

* (18:07) Spectral Inference Networks

* (19:15) Connections to FermiNet / computational physics, a series of papers

* (33:52) Deep slow feature analysis — interpretability and findings on eigenfunctions

* (39:07) Following up on eigenfunctions (there are indeed only so many hours in a day; I have been asking the Substack people if they can ship 40-hour days, but I don’t think they’ve gotten to it yet)

* (42:17) Power iteration and intuitions (see the first sketch after this outline)

* (45:23) Projective representation theory

* (46:00) ???

* (46:54) Geomancer and learning to decompose a manifold from data

* (47:45) we consider the question of whether you will spend 90 more minutes of this podcast episode (there are not 90 more minutes left in this podcast episode, but there could have been)

* (1:08:47) Learning embeddings

* (1:11:12) The “unexpected emergent property” of Geomancer

* (1:14:43) Learned embeddings and disentangling and preservation of topology

* N.B. I still haven’t managed to do this in Colab because I keep crashing my instance when I use s3o4d :(

* (1:21:07) What’s missing from the ~ current (deep learning) paradigm ~

* (1:29:04) LLMs as Swiss Army knives

* (1:32:05) RL and human learning — TD learning in the brain (see the second sketch after this outline)

* (1:37:43) Models that cover the Pareto Front (image below)

* (1:46:54) AI accelerators and doubling down on transformers

* (1:48:27) On Slow Research — chasing big questions and what makes problems attractive

* (1:53:50) Future work on Geomancer

* (1:55:35) Finding balance in pursuing interesting and lucrative work

* (2:00:40) Outro
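A quick aside on the power iteration David and I discuss at (42:17): the intuition behind many spectral methods is that repeatedly applying a matrix (or operator) to a vector and renormalizing converges to the dominant eigenvector — Spectral Inference Networks generalize this kind of eigen-computation to function spaces. Here’s a minimal numpy sketch of the basic idea (my own illustration, not code from any of David’s papers):

```python
import numpy as np

# Power iteration: repeatedly apply A and renormalize; the iterates
# converge to the eigenvector of the largest-magnitude eigenvalue.

def power_iteration(A, num_iters=1000, tol=1e-10, seed=0):
    v = np.random.default_rng(seed).normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v
        w /= np.linalg.norm(w)
        # Compare up to sign so a negative dominant eigenvalue still converges.
        if min(np.linalg.norm(w - v), np.linalg.norm(w + v)) < tol:
            v = w
            break
        v = w
    return v @ A @ v, v  # Rayleigh quotient estimates the eigenvalue

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, vec = power_iteration(A)
print(lam)  # ~3.618, the dominant eigenvalue of A
```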
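Similarly, for the TD learning discussion at (1:32:05): the classic result is that dopamine neuron firing resembles the temporal-difference prediction error. A toy tabular TD(0) sketch (again just my own illustration):

```python
import numpy as np

# Tabular TD(0): nudge the value of a state toward the bootstrapped
# target r + gamma * V(s'). The TD error is the quantity dopamine
# responses are often said to resemble.

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    return td_error

# Toy chain: state 0 -> state 1 -> state 2 (terminal, value fixed at 0),
# with reward 1.0 received on the final transition.
V = np.zeros(3)
for _ in range(500):
    td0_update(V, 0, 0.0, 1)
    td0_update(V, 1, 1.0, 2)
print(V)  # V[1] -> ~1.0 and V[0] -> ~gamma * V[1] ~= 0.99
```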

Links:

* Papers

* Natural Quantum Monte Carlo Computation of Excited States (2023)

* Making sense of raw input (2021)

* Integrable Nonparametric Flows (2020)

* Disentangling by Subspace Diffusion (2020)

* Ab initio solution of the many-electron Schrödinger equation with deep neural networks (2020)

* Spectral Inference Networks (2018)

* Connecting GANs and Actor-Critic Methods (2016)

* Learning Structure in Time Series for Neuroscience and Beyond (2015, dissertation)

* Robust learning of low-dimensional dynamics from large neural ensembles (2013)

* Probabilistic Deterministic Infinite Automata (2010)

* Other

* On Slow Research

* “I just want to put this out here so that no one ever says ‘we can just get around the data limitations of LLMs with self-play’ ever again.”



Get full access to The Gradient at thegradientpub.substack.com/subscribe
