Sewon Min: The Science of Natural Language

Author: Daniel Bashir | Date: March 23, 2023 | Duration: 1:42:44

In episode 65 of The Gradient Podcast, Daniel Bashir speaks to Sewon Min.

Sewon is a fifth-year PhD student in the NLP group at the University of Washington, advised by Hannaneh Hajishirzi and Luke Zettlemoyer. She is a part-time visiting researcher at Meta AI and a recipient of the JP Morgan PhD Fellowship. She previously spent time at Google Research and Salesforce Research.

Have suggestions for future podcast guests (or other feedback)? Let us know here!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (03:00) Origin Story

* (04:20) Evolution of Sewon’s interests, question-answering and practical NLP

* (07:00) Methodology concerns about benchmarks

* (07:30) Multi-hop reading comprehension

* (09:30) Do multi-hop QA benchmarks actually measure multi-hop reasoning?

* (12:00) How models can “cheat” multi-hop benchmarks

* (13:15) Explicit compositionality

* (16:05) Commonsense reasoning and background information

* (17:30) On constructing good benchmarks

* (18:40) AmbigQA and ambiguity

* (22:20) Types of ambiguity

* (24:20) Practical possibilities for models that can handle ambiguity

* (25:45) FaVIQ and fact-checking benchmarks

* (28:45) External knowledge

* (29:45) Fact verification and “complete understanding of evidence”

* (31:30) Do models do what we expect/intuit in reading comprehension?

* (34:40) Applications for fact-checking systems

* (36:40) Intro to in-context learning (ICL)

* (38:55) Example of an ICL demonstration (see the illustrative sketch after this outline)

* (40:45) Rethinking the Role of Demonstrations and what matters for successful ICL

* (43:00) Evidence for a Bayesian inference perspective on ICL

* (45:00) ICL + gradient descent and what it means to “learn”

* (47:00) MetaICL and efficient ICL

* (49:30) Distance between tasks and MetaICL task transfer

* (53:00) Compositional tasks for language models, compositional generalization

* (55:00) The number and diversity of meta-training tasks

* (58:30) MetaICL and Bayesian inference

* (1:00:30) Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations

* (1:02:00) The copying effect

* (1:03:30) Copying effect for non-identical examples

* (1:06:00) More thoughts on ICL

* (1:08:00) Understanding Chain-of-Thought Prompting

* (1:11:30) Bayes strikes again

* (1:12:30) Intro to Sewon’s text retrieval research

* (1:15:30) Dense Passage Retrieval (DPR)

* (1:18:40) Similarity in QA and retrieval

* (1:20:00) Improvements for DPR

* (1:21:50) Nonparametric Masked Language Modeling (NPM)

* (1:24:30) Difficulties in training NPM and solutions

* (1:26:45) Follow-on work

* (1:29:00) Important fundamental limitations of language models

* (1:31:30) Sewon’s experience doing a PhD

* (1:34:00) Research challenges suited for academics

* (1:35:00) Joys and difficulties of the PhD

* (1:36:30) Sewon’s advice for aspiring PhDs

* (1:38:30) Incentives in academia, production of knowledge

* (1:41:50) Outro
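
For listeners new to in-context learning, here is a minimal sketch of what a prompt with demonstrations looks like. The sentiment task, examples, and template below are illustrative assumptions, not taken from the episode or from Sewon's papers.

```python
# Illustrative sketch only: a minimal in-context learning (ICL) prompt.
# The sentiment task, examples, and template are assumptions for
# demonstration purposes; nothing here is quoted from the episode.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
    ("A stunning, heartfelt lead performance.", "positive"),
]

test_input = "The plot dragged and the jokes fell flat."

# Each demonstration is an (input, label) pair. The model is asked to
# continue the pattern for the test input with no gradient updates.
prompt = "".join(f"Review: {x}\nSentiment: {y}\n\n" for x, y in demonstrations)
prompt += f"Review: {test_input}\nSentiment:"

print(prompt)
```

Rethinking the Role of Demonstrations, discussed around (40:45), asks which ingredients of such a prompt (the input distribution, the label space, the format, the correctness of the labels) actually drive ICL performance.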

Links:

* Sewon’s homepage and Twitter

* Papers

* Solving and re-thinking benchmarks

* Multi-hop Reading Comprehension through Question Decomposition and Rescoring / Compositional Questions Do Not Necessitate Multi-hop Reasoning

* AmbigQA: Answering Ambiguous Open-domain Questions

* FaVIQ: FAct Verification from Information-seeking Questions

* Language Modeling

* Rethinking the Role of Demonstrations

* MetaICL: Learning to Learn In Context

* Towards Understanding Chain-of-Thought Prompting

* Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations

* Text representation/retrieval

* Dense Passage Retrieval (see the sketch after these links)

* Nonparametric Masked Language Modeling
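
As a companion to the retrieval links above, here is a minimal sketch of the dual-encoder scoring idea behind Dense Passage Retrieval: questions and passages are embedded separately and ranked by inner product. The toy `embed()` function is a stand-in assumption; DPR itself uses trained BERT-based encoders.

```python
# A minimal sketch of the dual-encoder scoring idea behind Dense Passage
# Retrieval (DPR): embed questions and passages separately, then rank
# passages by inner product. The embed() function below is a toy
# stand-in assumption; DPR uses trained BERT-based encoders.

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy encoder: hash tokens into a dense bag-of-words vector.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

question = "what does dense passage retrieval learn"
passages = [
    "Dense passage retrieval learns dense embeddings of passages for QA.",
    "The weather in Seattle is often rainy in the winter months.",
]

q = embed(question)
# DPR's relevance score is the inner product of the two embeddings.
scores = [float(q @ embed(p)) for p in passages]
best = int(np.argmax(scores))
print(f"Top passage (score {scores[best]:.0f}): {passages[best]}")
```

The design point the episode touches on around (1:18:40) is that this similarity function is simple by construction; the modeling work goes into training the encoders so that inner product reflects question-passage relevance.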



Get full access to The Gradient at thegradientpub.substack.com/subscribe


The Gradient: Perspectives on AI
Podcast Episodes
Laurence Liew: AI Singapore [not-audio_url] [/not-audio_url]

Duration: 50:28
In episode 98 of The Gradient Podcast, Daniel Bashir speaks to Laurence Liew.Laurence is the Director for AI Innovation at AI Singapore. He is driving the adoption of AI by the Singapore ecosystem through the 100 Experim…
Michael Levin & Adam Goldstein: Intelligence and its Many Scales [not-audio_url] [/not-audio_url]

Duration: 57:21
In episode 97 of The Gradient Podcast, Daniel Bashir speaks to Professor Michael Levin and Adam Goldstein. Professor Levin is a Distinguished Professor and Vannevar Bush Chair in the Biology Department at Tufts Universit…
Jonathan Frankle: From Lottery Tickets to LLMs [not-audio_url] [/not-audio_url]

Duration: 1:08:22
In episode 96 of The Gradient Podcast, Daniel Bashir speaks to Jonathan Frankle.Jonathan is the Chief Scientist at MosaicML and (as of release). Jonathan completed his PhD at MIT, where he investigated the properties of…
Nao Tokui: "Surfing" Musical Creativity with AI [not-audio_url] [/not-audio_url]

Duration: 1:02:19
In episode 95 of The Gradient Podcast, Daniel Bashir speaks to Nao Tokui.Nao Tokui is an artist/DJ and researcher based in Tokyo. While pursuing his Ph.D. at The University of Tokyo, he produced his first music album and…
Divyansh Kaushik: The Realities of AI Policy [not-audio_url] [/not-audio_url]

Duration: 1:17:44
In episode 94 of The Gradient Podcast, Daniel Bashir speaks to Divyansh Kaushik.Divyansh is the Associate Director for Emerging Technologies and National Security at the Federation of American Scientists where his focus…
Tal Linzen: Psycholinguistics and Language Modeling [not-audio_url] [/not-audio_url]

Duration: 1:14:50
In episode 93 of The Gradient Podcast, Daniel Bashir speaks to Professor Tal Linzen.Professor Linzen is an Associate Professor of Linguistics and Data Science at New York University and a Research Scientist at Google. He…
Kevin K. Yang: Engineering Proteins with ML [not-audio_url] [/not-audio_url]

Duration: 1:00:00
In episode 92 of The Gradient Podcast, Daniel Bashir speaks to Kevin K. Yang.Kevin is a senior researcher at Microsoft Research (MSR) who works on problems at the intersection of machine learning and biology, with an emp…
Miles Grimshaw: Benchmark, LangChain, and Investing in AI [not-audio_url] [/not-audio_url]

Duration: 1:00:47
In episode 90 of The Gradient Podcast, Daniel Bashir speaks to Miles Grimshaw.Miles is General Partner at Benchmark. He was previously a General Partner at Thrive Capital, where he helped the firm raise its fourth and fi…
Shreya Shankar: Machine Learning in the Real World [not-audio_url] [/not-audio_url]

Duration: 1:16:36
In episode 89 of The Gradient Podcast, Daniel Bashir speaks to Shreya Shankar.Shreya is a computer scientist pursuing her PhD in databases at UC Berkeley. Her research interest is in building end-to-end systems for peopl…