Kyunghyun Cho: Neural Machine Translation, Language, and Doing Good Science

Author: Daniel Bashir | February 9, 2023 | Duration: 2:08:02

In episode 59 of The Gradient Podcast, Daniel Bashir speaks to Professor Kyunghyun Cho.

Professor Cho is an associate professor of computer science and data science at New York University and a CIFAR Fellow of Learning in Machines & Brains. He is also a senior director of frontier research on the Prescient Design team within Genentech Research & Early Development. He was a research scientist at Facebook AI Research from 2017 to 2020 and a postdoctoral fellow at the University of Montreal under the supervision of Prof. Yoshua Bengio, after receiving his MSc and PhD degrees from Aalto University. He received the Samsung Ho-Am Prize in Engineering in 2021.

Have suggestions for future podcast guests (or other feedback)? Let us know here!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:15) How Professor Cho got into AI, going to Finland for a PhD

* (06:30) Accidental and non-accidental parts of Prof Cho’s journey, the role of timing in career trajectories

* (09:30) Prof Cho’s M.Sc. thesis on Restricted Boltzmann Machines

* (17:00) The state of autodiff at the time

* (20:00) Finding non-mainstream problems and examining limitations of mainstream approaches, anti-dogmatism, Yoshua Bengio appreciation

* (24:30) Detaching identity from work, scientific training

* (26:30) The rest of Prof Cho’s PhD, the first ICLR conference, working in Yoshua Bengio’s lab

* (34:00) Prof Cho’s isolation during his PhD and its impact on his work—transcending insecurity and working on unsexy problems

* (41:30) The importance of identifying important problems and developing an independent research program, ceiling on the number of important research problems

* (46:00) Working on Neural Machine Translation, Jointly Learning to Align and Translate

* (1:01:45) What RNNs and earlier NN architectures can still teach us, why transformers were successful

* (1:08:00) Science progresses gradually

* (1:09:00) Learning distributed representations of sentences, extending the distributional hypothesis

* (1:21:00) Difficulty and limitations in evaluation—directions of dynamic benchmarks, trainable evaluation metrics

* (1:29:30) Mixout and AdapterFusion: fine-tuning and intervening on pre-trained models, pre-training as initialization, destructive interference

* (1:39:00) Analyzing neural networks as reading tea leaves

* (1:44:45) Importance of healthy skepticism for scientists

* (1:45:30) Language-guided policies and grounding, vision-language navigation

* (1:55:30) Prof Cho’s reflections on 2022

* (2:00:00) Obligatory ChatGPT content

* (2:04:50) Finding balance

* (2:07:15) Outro

Links:

* Professor Cho’s homepage and Twitter

* Papers

  * M.Sc. thesis and PhD thesis

  * NMT and attention

    * Properties of NMT

    * Learning Phrase Representations

    * Neural machine translation by jointly learning to align and translate

  * More recent work

    * Learning Distributed Representations of Sentences from Unlabelled Data

    * Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models

    * Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes’ Rule

    * AdapterFusion: Non-Destructive Task Composition for Transfer Learning



Get full access to The Gradient at thegradientpub.substack.com/subscribe

