Cameron Jones & Sean Trott: Understanding, Grounding, and Reference in LLMs

Author: Daniel Bashir | February 22, 2024 | Duration: 1:59:26

In episode 112 of The Gradient Podcast, Daniel Bashir speaks to Cameron Jones and Sean Trott.

Cameron is a PhD candidate in the Cognitive Science Department at the University of California, San Diego. His research compares how humans and large language models process language about world knowledge, situation models, and theory of mind.

Sean is an Assistant Teaching Professor in the Cognitive Science Department at the University of California, San Diego. His research interests include probing large language models, ambiguity in language, how ambiguous words are represented, and pragmatic inference. He previously completed his PhD at UCSD.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:55) Cameron’s background

* (06:00) Sean’s background

* (08:15) Unexpected capabilities of language models and the need for embodiment to understand meaning

* (11:05) Interpreting results of Turing tests, separating what humans and LLMs do when behaving as though they “understand”

* (14:27) Internal mechanisms, interpretability, how we test theories

* (16:40) Languages are efficient, but for whom?

* (17:30) Initial motivations: lexical ambiguity

* (19:20) The balance of meanings across wordforms

* (22:35) Tension between speaker- and comprehender-oriented pressures in lexical ambiguity

* (25:05) Context and potential vs. realized ambiguity

* (27:15) LLM-ology

* (28:30) Studying LLMs as models of human cognition and as interesting objects of study in their own right

* (30:03) Example of explaining away effects

* (33:54) The internalist account of belief sensitivity—behavior and internal representations

* (37:43) LLMs and the False Belief Task

* (42:05) Hypothetical on observed behavior and inferences about internal representations

* (48:05) Distributional Semantics Still Can’t Account for Affordances

* (50:25) Tests of embodied theories and limitations of distributional cues

* (53:54) Multimodal models and object affordances

* (58:30) Language and grounding, other buzzwords

* (59:45) How could we know if LLMs understand language?

* (1:04:50) Reference: as a thing words do vs. ontological notion

* (1:11:38) The Role of Physical Inference in Pronoun Resolution

* (1:16:40) World models and world knowledge

* (1:19:45) EPITOME

* (1:20:20) The different tasks

* (1:26:43) Confounders / “attending” in LM performance on tasks

* (1:30:30) Another hypothetical, on theory of mind

* (1:32:26) How much information can language provide in service of mentalizing?

* (1:35:14) Convergent validity and coherence/validity of theory of mind

* (1:39:30) Interpretive questions about behavior w/r/t theory of mind

* (1:43:35) Does GPT-4 Pass the Turing Test?

* (1:44:00) History of the Turing Test

* (1:47:05) Interrogator strategies and the strength of the Turing Test

* (1:52:15) “Internal life” and personality

* (1:53:30) How should this research impact how we assess / think about LLM abilities?

* (1:58:56) Outro

Links:

* Cameron’s homepage and Twitter

* Sean’s homepage and Twitter

* Research — Language and NLP

* Languages are efficient, but for whom?

* Research — LLM-ology

* Do LLMs know what humans know?

* Distributional Semantics Still Can’t Account for Affordances

* In Cautious Defense of LLM-ology

* Should Psycholinguists use LLMs as “model organisms”?

* (Re)construing Meaning in NLP

* Research — language and grounding, theory of mind, reference [insert other buzzwords here]

* Do LLMs have a “theory of mind”?

* How could we know if LLMs understand language?

* Does GPT-4 Pass the Turing Test?

* Could LMs change language?

* The extended mind and why it matters for cognitive science research

* EPITOME

* The Role of Physical Inference in Pronoun Resolution



Get full access to The Gradient at thegradientpub.substack.com/subscribe

