Harvey Lederman: Propositional Attitudes and Reference in Language Models

Author: Daniel Bashir | January 11, 2024 | Duration: 2:10:34

In episode 106 of The Gradient Podcast, Daniel Bashir speaks to Professor Harvey Lederman.

Professor Lederman is a professor of philosophy at UT Austin. He has broad interests in contemporary philosophy and in the history of philosophy: his areas of specialty include philosophical logic, the Ming dynasty philosopher Wang Yangming, epistemology, and philosophy of language. He has recently been working on incomplete preferences, on "trying" in the philosophy of language, and on Wang Yangming's moral metaphysics.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:15) Harvey’s background

* (05:30) Higher-order metaphysics and propositional attitudes

* (06:25) Motivations

* (12:25) Setup: syntactic types and ontological categories

* (25:11) What makes higher-order languages meaningful and not vague?

* (25:57) Higher-order languages corresponding to the world

* (30:52) Extreme vagueness

* (35:32) Desirable features of languages and important questions in philosophy

* (36:42) Higher-order identity

* (40:32) Intuitions about mental content, language, context-sensitivity

* (50:42) Perspectivism

* (51:32) Co-referring names, identity statements

* (55:42) The paper’s approach, “know” as context-sensitive

* (57:24) Propositional attitude psychology and mentalese generalizations

* (59:57) The “good standing” of theorizing about propositional attitudes

* (1:02:22) Mentalese

* (1:03:32) “Does knowledge imply belief?” — when a question does not have good standing

* (1:06:17) Sense, Reference, and Substitution

* (1:07:07) Fregeans and the principle of Substitution

* (1:12:12) Follow-up work to this paper

* (1:13:39) Do Language Models Produce Reference Like Libraries or Like Librarians?

* (1:15:02) Bibliotechnism

* (1:19:08) Inscriptions and reference, what it takes for something to refer

* (1:22:37) Derivative and basic reference

* (1:24:47) Intuition: n-gram models and reference

* (1:28:22) Meaningfulness in sentences produced by n-gram models

* (1:30:40) Bibliotechnism and LLMs, disanalogies to n-grams

* (1:33:17) On other recent work (vector grounding, do LMs refer?, etc.)

* (1:40:12) Causal connections and reference, how bibliotechnism makes good on the meanings of sentences

* (1:45:46) RLHF, sensitivity to truth and meaningfulness

* (1:48:47) Intelligibility

* (1:50:52) When LLMs produce novel reference

* (1:53:37) Novel reference vs. find-replace

* (1:56:00) Directionality example

* (1:58:22) Human intentions and derivative reference

* (2:00:47) Between bibliotechnism and agency

* (2:05:32) Where do invented names / novel reference come from?

* (2:07:17) Further questions

* (2:10:04) Outro

Links:

* Harvey’s homepage and Twitter

* Papers discussed

* Higher-order metaphysics and propositional attitudes

* Perspectivism

* Sense, Reference, and Substitution

* Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs



Get full access to The Gradient at thegradientpub.substack.com/subscribe
