Tal Linzen: Psycholinguistics and Language Modeling

Author: Daniel Bashir | October 5, 2023 | Duration: 1:14:50

In episode 93 of The Gradient Podcast, Daniel Bashir speaks to Professor Tal Linzen.

Professor Linzen is an Associate Professor of Linguistics and Data Science at New York University and a Research Scientist at Google. He directs the Computation and Psycholinguistics Lab, where he and his collaborators use behavioral experiments and computational methods to study how people learn and understand language. They also develop methods for evaluating, understanding, and improving computational systems for language processing.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:25) Prof. Linzen’s background

* (05:37) Back and forth between psycholinguistics and deep learning research, LM evaluation

* (08:40) How can deep learning successes/failures help us understand human language use, methodological concerns, comparing human representations to LM representations

* (14:22) Behavioral capacities and degrees of freedom in representations

* (16:40) How LMs are becoming less and less like humans

* (19:25) Assessing LSTMs’ ability to learn syntax-sensitive dependencies

* (22:48) Similarities between structure-sensitive dependencies, sophistication of syntactic representations

* (25:30) RNNs implicitly implement tensor-product representations—vector representations of symbolic structures

* (29:45) Representations required to solve certain tasks, difficulty of natural language

* (33:25) Accelerating progress towards human-like linguistic generalization

* (34:30) The pretraining-agnostic identically distributed (PAID) evaluation paradigm

* (39:50) Ways to mitigate differences in evaluation

* (44:20) Surprisal does not explain syntactic disambiguation difficulty

* (45:00) How to measure processing difficulty, predictability and processing difficulty

* (49:20) What other factors influence processing difficulty?

* (53:10) How to plant trees in language models

* (55:45) Architectural influences on generalizing knowledge of linguistic structure

* (58:20) “Cognitively relevant regimes” and speed of generalization

* (1:00:45) Acquisition of syntax and sampling simpler vs. more complex sentences

* (1:04:03) Curriculum learning for progressively more complicated syntax

* (1:05:35) Hypothesizing tree-structured representations

* (1:08:00) Reflecting on a prediction from the past

* (1:10:15) Goals and “the correct direction” in AI research

* (1:14:04) Outro

Links:

* Prof. Linzen’s Twitter and homepage

* Papers

* Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies

* RNNs Implicitly Implement Tensor-Product Representations

* How Can We Accelerate Progress Towards Human-like Linguistic Generalization?

* Surprisal does not explain syntactic disambiguation difficulty: evidence from a large-scale benchmark

* How to Plant Trees in LMs: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases



Get full access to The Gradient at thegradientpub.substack.com/subscribe

