Jacob Andreas: Language, Grounding, and World Models

Author: Daniel Bashir | October 10, 2024 | Duration: 1:52:43

Episode 140

I spoke with Professor Jacob Andreas about:

* Language and the world

* World models

* How he’s developed as a scientist

Enjoy!

Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:40) Jacob’s relationship with grounding fundamentalism

* (05:21) Jacob’s reaction to LLMs

* (11:24) Grounding language — is there a philosophical problem?

* (15:54) Grounding and language modeling

* (24:00) Analogies between humans and LMs

* (30:46) Grounding language with points and paths in continuous spaces

* (32:00) Neo-Davidsonian formal semantics

* (36:27) Evolving assumptions about structure prediction

* (40:14) Segmentation and event structure

* (42:33) How much do word embeddings encode about syntax?

* (43:10) Jacob’s process for studying scientific questions

* (45:38) Experiments and hypotheses

* (53:01) Calibrating assumptions as a researcher

* (54:08) Flexibility in research

* (56:09) Measuring Compositionality in Representation Learning

* (56:50) Developing an independent research agenda and developing a lab culture

* (1:03:25) Language Models as Agent Models

* (1:04:30) Background

* (1:08:33) Toy experiments and interpretability research

* (1:13:30) Developing effective toy experiments

* (1:15:25) Language Models, World Models, and Human Model-Building

* (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”

* (1:21:32) What is a world model?

* (1:23:45) The Big Question — from meaning to world models

* (1:28:21) From “meaning” to precise questions about LMs

* (1:32:01) Mechanistic interpretability and reading tea leaves

* (1:35:38) Language and the world

* (1:38:07) Towards better language models

* (1:43:45) Model editing

* (1:45:50) On academia’s role in NLP research

* (1:49:13) On good science

* (1:52:36) Outro

Links:

* Jacob’s homepage and Twitter

* Language Models, World Models, and Human Model-Building

* Papers

* Semantic Parsing as Machine Translation (2013)

* Grounding language with points and paths in continuous spaces (2014)

* How much do word embeddings encode about syntax? (2014)

* Translating neuralese (2017)

* Analogs of linguistic structure in deep representations (2017)

* Learning with latent language (2018)

* Learning from Language (2018)

* Measuring Compositionality in Representation Learning (2019)

* Experience grounds language (2020)

* Language Models as Agent Models (2022)



Get full access to The Gradient at thegradientpub.substack.com/subscribe
