Harvey Lederman: Propositional Attitudes and Reference in Language Models

Author: Daniel Bashir | Date: January 11, 2024 | Duration: 2:10:34

In episode 106 of The Gradient Podcast, Daniel Bashir speaks to Professor Harvey Lederman.

Professor Lederman is a professor of philosophy at UT Austin. He has broad interests in contemporary philosophy and in the history of philosophy: his areas of specialty include philosophical logic, the Ming dynasty philosopher Wang Yangming, epistemology, and philosophy of language. He has recently been working on incomplete preferences, on trying in the philosophy of language, and on Wang Yangming’s moral metaphysics.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:15) Harvey’s background

* (05:30) Higher-order metaphysics and propositional attitudes

* (06:25) Motivations

* (12:25) Setup: syntactic types and ontological categories

* (25:11) What makes higher-order languages meaningful and not vague?

* (25:57) Higher-order languages corresponding to the world

* (30:52) Extreme vagueness

* (35:32) Desirable features of languages and important questions in philosophy

* (36:42) Higher-order identity

* (40:32) Intuitions about mental content, language, context-sensitivity

* (50:42) Perspectivism

* (51:32) Co-referring names, identity statements

* (55:42) The paper’s approach, “know” as context-sensitive

* (57:24) Propositional attitude psychology and mentalese generalizations

* (59:57) The “good standing” of theorizing about propositional attitudes

* (1:02:22) Mentalese

* (1:03:32) “Does knowledge imply belief?” — when a question does not have good standing

* (1:06:17) Sense, Reference, and Substitution

* (1:07:07) Fregeans and the principle of Substitution

* (1:12:12) Follow-up work to this paper

* (1:13:39) Do Language Models Produce Reference Like Libraries or Like Librarians?

* (1:15:02) Bibliotechnism

* (1:19:08) Inscriptions and reference, what it takes for something to refer

* (1:22:37) Derivative and basic reference

* (1:24:47) Intuition: n-gram models and reference

* (1:28:22) Meaningfulness in sentences produced by n-gram models

* (1:30:40) Bibliotechnism and LLMs, disanalogies to n-grams

* (1:33:17) On other recent work (vector grounding, do LMs refer?, etc.)

* (1:40:12) Causal connections and reference, how bibliotechnism makes good on the meanings of sentences

* (1:45:46) RLHF, sensitivity to truth and meaningfulness

* (1:48:47) Intelligibility

* (1:50:52) When LLMs produce novel reference

* (1:53:37) Novel reference vs. find-replace

* (1:56:00) Directionality example

* (1:58:22) Human intentions and derivative reference

* (2:00:47) Between bibliotechnism and agency

* (2:05:32) Where do invented names / novel reference come from?

* (2:07:17) Further questions

* (2:10:04) Outro

Links:

* Harvey’s homepage and Twitter

* Papers discussed

* Higher-order metaphysics and propositional attitudes

* Perspectivism

* Sense, Reference, and Substitution

* Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs

Get full access to The Gradient at thegradientpub.substack.com/subscribe

