Emily M. Bender — Language Models and Linguistics

Host: Lukas Biewald | September 9, 2021 | Duration: 1:12:55

In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.

Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender

---

Emily M. Bender is a Professor of Linguistics at the University of Washington and Faculty Director of its Master's Program in Computational Linguistics. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.

---

Timestamps:

0:00 Sneak peek, intro

1:03 Stochastic Parrots

9:57 The societal impact of big language models

16:49 How language models can be harmful

26:00 The important difference between linguistic form and meaning

34:40 The octopus thought experiment

42:11 Language acquisition and the future of language models

49:47 Why benchmarks are limited

54:38 Ways of complementing benchmarks

1:01:20 The #BenderRule

1:03:50 Language diversity and linguistics

1:12:49 Outro


Lukas Biewald hosts Gradient Dissent: Conversations on AI, a series that moves beyond theoretical discussions to examine how artificial intelligence is actually built and deployed. Each episode features a direct, unscripted conversation with a leading practitioner: you'll hear from engineers and researchers at places like NVIDIA, Meta, Google, Lyft, and OpenAI. The focus is on the tangible challenges and breakthroughs they encounter, from initial research to the complex reality of putting models into production. This isn't about abstract futures; it's a grounded look at the decisions shaping the field right now.

Biewald, bringing his perspective from Weights & Biases, steers conversations toward the practical trade-offs and collaborative efforts that define modern AI work. For anyone in technology or business who wants to understand the mechanics behind the headlines, this podcast offers a rare, candid window into the process. You'll come away with a clearer sense of how ideas become functional systems and what it really takes to operate at the cutting edge.
Language: English | Episodes: 100

Gradient Dissent: Conversations on AI
Podcast Episodes
Shaping the World of Robotics with Chelsea Finn

Duration: 53:46
In the newest episode of Gradient Dissent, Chelsea Finn, Assistant Professor at Stanford's Computer Science Department, discusses the forefront of robotics and machine learning. Discover her groundbreaking work, where two…
The Power of AI in Search with You.com's Richard Socher

Duration: 1:08:26
In the latest episode of Gradient Dissent, Richard Socher, CEO of You.com, shares his insights on the power of AI in search. The episode focuses on how advanced language models like GPT-4 are transforming search engines…
AI’s Future: Investment & Impact with Sarah Guo and Elad Gil

Duration: 1:04:14
Explore the future of investment and impact in AI with host Lukas Biewald and guests Elad Gil and Sarah Guo of the No Priors podcast. Sarah is the founder of Conviction VC, an AI-centric $100 million venture fund. Elad, a…
Revolutionizing AI Data Management with Jerry Liu, CEO of LlamaIndex

Duration: 57:35
In the latest episode of Gradient Dissent, we explore the innovative features and impact of LlamaIndex in AI data management with Jerry Liu, CEO of LlamaIndex. Jerry shares insights on how LlamaIndex integrates diverse d…
Enabling LLM-Powered Applications with Harrison Chase of LangChain

Duration: 51:54
On this episode, we’re joined by Harrison Chase, Co-Founder and CEO of LangChain. Harrison and his team at LangChain are on a mission to make the process of creating applications powered by LLMs as easy as possible. We di…