Venkatesh Rao: Protocols, Intelligence, and Scaling

Author: Daniel Bashir | March 7, 2024 | Duration: 2:18:35

“There is this move from generality in a relative sense of ‘we are not as specialized as insects’ to generality in the sense of omnipotent, omniscient, godlike capabilities. And I think there's something very dangerous that happens there, which is you start thinking of the word ‘general’ in completely unhinged ways.”

In episode 114 of The Gradient Podcast, Daniel Bashir speaks to Venkatesh Rao.

Venkatesh is a writer and consultant. He has been writing the widely read Ribbonfarm blog since 2007, and more recently, the popular Ribbonfarm Studio Substack newsletter. He is the author of Tempo, a book on timing and decision-making, and is currently working on his second book, on the foundations of temporality. He has been an independent consultant since 2011, supporting senior executives in the technology industry. His work in recent years has focused on the AI, semiconductor, sustainability, and protocol technology sectors. He holds a PhD in control theory (2003) from the University of Michigan. He is currently based in the Seattle area, and enjoys dabbling in robotics in his spare time. You can learn more about his work at venkateshrao.com.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:38) Origins of Ribbonfarm and Venkat’s academic background

* (04:23) Voice and recurring themes in Venkat’s work

* (11:45) Patch models and multi-agent systems: integrating philosophy of language, balancing realism with tractability

* (21:00) More on abstractions vs. tractability in Venkat’s work

* (29:07) Scaling of industrial value systems, characterizing AI as a discipline

* (39:25) Emergent science, intelligence and abstractions, presuppositions in science, generality and universality, cameras and engines

* (55:05) Psychometric terms

* (1:09:07) Inductive biases (yes, I mentioned the No Free Lunch Theorem and then just talked about the definition of inductive bias rather than the actual theorem 🤡)

* (1:18:13) LLM training and efficiency, comparing LLMs to humans

* (1:23:35) Experiential age, analogies for knowledge transfer

* (1:30:50) More clarification on the analogy

* (1:37:20) Massed Muddler Intelligence and protocols

* (1:38:40) Introducing protocols and the Summer of Protocols

* (1:49:15) Evolution of protocols, hardness

* (1:54:20) LLMs, protocols, time, future visions, and progress

* (2:01:33) Protocols, drifting from value systems, friction, compiling explicit knowledge

* (2:14:23) Directions for ML people in protocols research

* (2:18:05) Outro

Links:

* Venkat’s Twitter and homepage

* Mediocre Computing

* Summer of Protocols and 2024 Call for Applications (apply!)

* Essays discussed

* Patch models and their applications to multivehicle command and control

* From Mediocre Computing

* Text is All You Need

* Magic, Mundanity, and Deep Protocolization

* A Camera, Not an Engine

* Massed Muddler Intelligence

* On protocols

* The Unreasonable Sufficiency of Protocols

* Protocols Don’t Build Pyramids

* Protocols in (Emergency) Time

* Atoms, Institutions, Blockchains



Get full access to The Gradient at thegradientpub.substack.com/subscribe

Hosted by Daniel Bashir, The Gradient: Perspectives on AI moves beyond surface-level headlines to explore the intricate machinery and human ideas shaping artificial intelligence. Each episode is built on a foundation of deep research, leading to conversations that are both technically substantive and broadly accessible. You'll hear from researchers, engineers, and philosophers who are actively building and critiquing our technological future, discussing not just how AI systems work, but the larger implications of their integration into society. This isn't about speculative hype; it's a grounded examination of real progress, persistent challenges, and ethical considerations from those on the front lines. The discussions peel back layers on topics like model architecture, policy, and the fundamental science behind the algorithms becoming part of our daily lives. For anyone curious about the substance behind the buzz-whether you have a technical background or are simply keen to understand a defining technology of our age-this podcast offers a crucial and thoughtful resource. Tune in for a consistently detailed and nuanced take that treats artificial intelligence with the complexity it deserves.