The Idiocracy Trap: Why Smart Machines Are Making Humans Dumb & Dumber
Jacob Ward warned us. Back in January 2022, the Oakland-based tech journalist published The Loop, a book about how AI is creating a world without choices, and he came on this show to sound the alarm about AI's threat to humanity. Three years later, we've all caught up with Ward. So where is he now on AI? Moderately vindicated but more pessimistic. His original thesis has proven disturbingly accurate: we're outsourcing decisions to AI at an accelerating pace. But he admits his book's weakest section was "how to fight back," and he still lacks concrete solutions. His fear, meanwhile, has evolved. Less worried about robot overlords, he is now more concerned about an "Idiocracy" of AI-dependent human serfs: a dystopian scenario in which humans become so stupid that they can't even appreciate Gore Vidal's quip that "I told you so" are the four most beautiful words in the English language.
I couldn't resist asking Anthropic's Claude about Ward's conclusions (not, of course, that I rely on it for anything). "Anecdotal," it countered with characteristic coolness. Well, Claude would say that, wouldn't it?
1. The "Idiocracy" threat is more immediate than AGI concerns. Ward argues we should fear humans becoming cognitively dependent rather than superintelligent machines taking over. He's seeing this now: Berkeley students can't distinguish between reading books and reading AI summaries.
2. AI follows market incentives, not ethical principles. Despite early rhetoric about responsible development, Ward observes the industry prioritizing profit over principles. Companies are openly betting on when single-person billion-dollar businesses will emerge, signaling massive job displacement.
3. The resistance strategy remains unclear. Ward admits his book's weakness was the "how to fight back" section, and he still lacks concrete solutions. The few examples of resistance he cites, like Signal's president protecting user data from training algorithms, require significant financial sacrifice.
4. Economic concentration creates systemic risk. The massive capital investments (Nvidia's $100 billion into OpenAI) create dangerous loops in which AI companies essentially invest in themselves. Ward warns this resembles classic bubble dynamics that could crash the broader economy.
5. "Weak perfection" is necessary for human development. Ward argues we need friction and inefficiency in our systems to maintain critical thinking skills. AI's promise to eliminate all cognitive work may eliminate the mental exercise that keeps humans intellectually capable.