Spence Green — Enterprise-scale Machine Translation


Author: Lukas Biewald July 17, 2021 Duration: 43:46
Spence shares his experience creating a product around human-in-the-loop machine translation, and explains how machine translation has evolved over the years.

---

Spence Green is co-founder and CEO of Lilt, an AI-powered language translation platform. Lilt combines human translators and machine translation to produce high-quality translations more efficiently.

---

🌟 Show notes: http://wandb.me/gd-spence-green
- Transcription of the episode
- Links to papers, projects, and people

⏳ Timestamps:
0:00 Sneak peek, intro
0:45 The story behind Lilt
3:08 Statistical MT vs neural MT
6:30 Domain adaptation and personalized models
8:00 The emergence of neural MT and development of Lilt
13:09 What success looks like for Lilt
18:20 Models that self-correct for gender bias
19:39 How Lilt runs its models in production
26:33 How far can MT go?
29:55 Why Lilt cares about human-computer interaction
35:04 Bilingual grammatical error correction
37:18 Human parity in MT
39:41 The unexpected challenges of prototype to production

---

Get our podcast on these platforms:
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Spotify: http://wandb.me/spotify
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 YouTube: http://wandb.me/youtube
👉 Soundcloud: http://wandb.me/soundcloud

Join our community of ML practitioners where we host AMAs, share interesting projects, and meet other people working in deep learning: http://wandb.me/slack

Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected

Lukas Biewald hosts Gradient Dissent: Conversations on AI, a series that moves beyond theoretical discussions to examine how artificial intelligence is actually built and deployed. Each episode features a direct, unscripted talk with a leading practitioner: you'll hear from engineers and researchers at places like NVIDIA, Meta, Google, Lyft, and OpenAI. The focus is on the tangible challenges and breakthroughs they encounter, from initial research to the complex reality of putting models into production. This isn't about abstract futures; it's a grounded look at the decisions shaping the field right now. Biewald, bringing his perspective from Weights & Biases, steers conversations toward the practical trade-offs and collaborative efforts that define modern AI work. For anyone in technology or business who wants to understand the mechanics behind the headlines, this podcast offers a rare, candid window into the process. You'll come away with a clearer sense of how ideas become functional systems and what it really takes to operate at the cutting edge.
Language: English | Episodes: 100

Gradient Dissent: Conversations on AI
Podcast Episodes
Pete Warden — Practical Applications of TinyML

Duration: 53:28
Pete is the Technical Lead of the TensorFlow Micro team, which works on deep learning for mobile and embedded devices. Lukas and Pete talk about hacking a Raspberry Pi to run AlexNet, the power and size constraints of emb…
Pieter Abbeel — Robotics, Startups, and Robotics Startups

Duration: 57:17
Pieter is the Chief Scientist and Co-founder at Covariant, where his team is building universal AI for robotic manipulation. Pieter also hosts The Robot Brains Podcast, in which he explores how far humanity has come in i…
Chris Albon — ML Models and Infrastructure at Wikimedia

Duration: 56:15
In this episode we're joined by Chris Albon, Director of Machine Learning at the Wikimedia Foundation. Lukas and Chris talk about Wikimedia's approach to content moderation, what it's like to work in a place so transparen…
Emily M. Bender — Language Models and Linguistics

Duration: 1:12:55
In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study. Sho…
Josh Bloom — The Link Between Astronomy and ML

Duration: 1:08:16
Josh explains how astronomy and machine learning have informed each other, their current limitations, and where their intersection goes from here.
Xavier Amatriain — Building AI-powered Primary Care

Duration: 50:09
Xavier shares his experience deploying healthcare models, augmenting primary care with AI, the challenges of "ground truth" in medicine, and robustness in ML. --- Xavier Amatriain is co-founder and CTO of Curai, an ML-ba…
Roger & DJ — The Rise of Big Data and CA's COVID-19 Response

Duration: 1:04:53
Roger and DJ share some of the history behind data science as we know it today, and reflect on their experiences working on California's COVID-19 response. --- Roger Magoulas is Senior Director of Data Strategy at Astron…
Amelia & Filip — How Pandora Deploys ML Models into Production

Duration: 40:49
Amelia and Filip give insights into the recommender systems powering Pandora, from developing models to balancing effectiveness and efficiency in production. --- Amelia Nybakke is a Software Engineer at Pandora. Her team…
Luis Ceze — Accelerating Machine Learning Systems

Duration: 48:28
From Apache TVM to OctoML, Luis gives direct insight into the world of ML hardware optimization, and where systems optimization is heading. --- Luis Ceze is co-founder and CEO of OctoML, co-author of the Apache TVM Proje…