Tal Linzen: Psycholinguistics and Language Modeling

Author: Daniel Bashir | October 5, 2023 | Duration: 1:14:50

In episode 93 of The Gradient Podcast, Daniel Bashir speaks to Professor Tal Linzen.

Professor Linzen is an Associate Professor of Linguistics and Data Science at New York University and a Research Scientist at Google. He directs the Computation and Psycholinguistics Lab, where he and his collaborators use behavioral experiments and computational methods to study how people learn and understand language. They also develop methods for evaluating, understanding, and improving computational systems for language processing.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:25) Prof. Linzen’s background

* (05:37) Back and forth between psycholinguistics and deep learning research, LM evaluation

* (08:40) How can deep learning successes/failures help us understand human language use, methodological concerns, comparing human representations to LM representations

* (14:22) Behavioral capacities and degrees of freedom in representations

* (16:40) How LMs are becoming less and less like humans

* (19:25) Assessing LSTMs’ ability to learn syntax-sensitive dependencies

* (22:48) Similarities between structure-sensitive dependencies, sophistication of syntactic representations

* (25:30) RNNs implicitly implement tensor-product representations—vector representations of symbolic structures

* (29:45) Representations required to solve certain tasks, difficulty of natural language

* (33:25) Accelerating progress towards human-like linguistic generalization

* (34:30) The pretraining-agnostic identically distributed (PAID) evaluation paradigm

* (39:50) Ways to mitigate differences in evaluation

* (44:20) Surprisal does not explain syntactic disambiguation difficulty

* (45:00) How to measure processing difficulty, predictability and processing difficulty

* (49:20) What other factors influence processing difficulty?

* (53:10) How to plant trees in language models

* (55:45) Architectural influences on generalizing knowledge of linguistic structure

* (58:20) “Cognitively relevant regimes” and speed of generalization

* (1:00:45) Acquisition of syntax and sampling simpler vs. more complex sentences

* (1:04:03) Curriculum learning for progressively more complicated syntax

* (1:05:35) Hypothesizing tree-structured representations

* (1:08:00) Reflecting on a prediction from the past

* (1:10:15) Goals and “the correct direction” in AI research

* (1:14:04) Outro

Links:

* Prof. Linzen’s Twitter and homepage

* Papers

* Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies

* RNNs Implicitly Implement Tensor-Product Representations

* How Can We Accelerate Progress Towards Human-like Linguistic Generalization?

* Surprisal does not explain syntactic disambiguation difficulty: evidence from a large-scale benchmark

* How to Plant Trees in LMs: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases



Get full access to The Gradient at thegradientpub.substack.com/subscribe

