No Dummy, AI Isn't Replacing Developer Jobs

Author: Noah Gift · May 15, 2025 · Duration: 14:41

Extensive Notes: "No Dummy: AI Will Not Replace Coders"

Introduction: The Critical Thinking Problem

  • America faces a critical thinking deficit, especially evident in narratives about AI automating developers' jobs
  • Speaker advocates for examining the narrative with core critical thinking skills
  • Suggests weighing the dominant narrative against several alternative explanations

Alternative Explanation 1: Non-Productive Employees

  • Organizations contain people who do "absolutely nothing"
  • If you fire a person who does no work, there will be no impact
  • These non-productive roles exist in academics, management, and technical industries
  • Reference to David Graeber's book "Bullshit Jobs," which categorizes meaningless jobs, including:
    • Taskmasters
    • Box tickers
    • Goons
  • When these jobs are eliminated, AI didn't replace them because "the job didn't need to exist"

Alternative Explanation 2: Low-Skilled Developers

  • Some developers have "very low or no skills, even negative skills"
  • Firing someone who writes "buggy code" and replacing them with a more productive developer (even one using auto-completion tools) isn't AI replacing a job
  • These developers have "negative value to an organization"
  • Removing such developers would improve the company regardless of automation
  • Compensating for their removal with better tooling, CI/CD, or software engineering best practices isn't AI replacement

Alternative Explanation 3: Basic Automation with Traditional Tools

  • Software engineers have been automating tasks for decades without AI
  • Speaker's example: at Disney Feature Animation (2003), he replaced manual weekend maintenance with bash scripts
  • "A bash script is not AI. It has no form of intelligence. It's a for loop with some conditions in it."
  • Many companies have poor processes that can be easily automated with basic scripts
  • This automation has "absolutely nothing to do with AI" and has "been happening for the history of software engineering"
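The kind of script described above is the whole trick. Here is a minimal sketch of weekend-maintenance automation (hypothetical directory and retention policy, not the actual Disney script): a for loop with some conditions, exactly as the speaker says.

```shell
#!/bin/sh
# Hypothetical weekend-maintenance chore: compress log files older
# than 7 days. A loop plus a couple of conditions -- no intelligence.
rotate_old_logs() {
    log_dir="$1"
    for f in "$log_dir"/*.log; do
        [ -e "$f" ] || continue               # glob matched nothing; skip
        # find prints the file only if it was modified within 7 days,
        # so empty output means the file is old enough to archive.
        if [ -z "$(find "$f" -mtime -7 -print)" ]; then
            gzip -f "$f"                      # replaces f with f.gz
            echo "archived: $f"
        fi
    done
}
```

Put a call like this in cron and the "weekend maintenance job" disappears. This is the automation that has been eliminating manual work for the entire history of software engineering, decades before LLMs.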

Alternative Explanation 4: Narrow vs. General Intelligence

  • Useful applications of machine learning exist:
    • Linear regression
    • K-means clustering
    • Autocompletion
    • Transcription
  • These are "narrow components" with "zero intelligence"
  • Each component does a specific task, not general intelligence
  • "When someone says you automated a job with a large language model, what are you talking about? It doesn't make sense."
  • LLMs are not intelligent; they're task-based systems
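To make "narrow components with zero intelligence" concrete, consider the first technique on the list, linear regression. A minimal sketch in plain Python (illustrative data, not from the episode) shows that the entire "learning" step is a closed-form arithmetic formula:

```python
# Ordinary least squares for y = a*x + b: just sums and division.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Illustrative data lying exactly on the line y = 2x + 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Every other item on the list (k-means, autocompletion, transcription) decomposes the same way: a fixed numerical procedure tuned to one narrow task.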

Alternative Explanation 5: Outsourcing

  • Companies commonly outsource jobs to lower-cost regions
  • Jobs claimed to be "taken by AI" may have been outsourced to India, Mexico, or China
  • This practice is common in America despite questionable ethics
  • Organizations may falsely claim AI automation when they've simply outsourced work

Alternative Explanation 6: Routine Corporate Layoffs

  • Large companies routinely fire ~3% of their workforce (Apple, Amazon mentioned)
  • Fear is used as a motivational tool in "toxic American corporations"
  • The "AI is coming for your job" narrative creates fear and motivation
  • More likely explanations: non-productive employees, low-skilled workers, simple automation, etc.

The Marketing and Sales Deception

  • CEOs (specifically mentions Anthropic and OpenAI) make false claims about agent capabilities
  • "The CEO of a company like Anthropic... is a liar who said that software engineering jobs will be automated with agents"
  • Speaker claims to have used these tools and found "they have no concept of intelligence"
  • Sam Altman (OpenAI) characterized as "a known liar" who "exaggerates about everything"
  • Marketing people with no software engineering background make claims about coding automation
  • Companies like NVIDIA promote AI hype to sell GPUs

Conclusion: The Real Problem

  • "AI" is a misnomer for large language models
  • These are "narrow intelligence" or "narrow machine learning" systems
  • They "do one task like autocomplete" and chain these tasks together
  • There is "no concept of intelligence embedded inside"
  • The speaker sees a bigger issue: lack of critical thinking in America
  • Warns that LLMs are "dumb as a bag of rocks" but powerful tools
  • Left in inexperienced hands, these tools could create "catastrophic software"
  • Rejects the narrative that "AI will replace software engineers" as having "absolutely zero evidence"

Key Quotes

"We have a real problem with critical thinking in America. And one of the places that is very evident is this false narrative that's been spread about AI automating developers' jobs."

"If you fire a person that does no work, there will be no impact."

"I have been automating people's jobs my entire life... That's what I've been doing with basic scripts. A bash script is not AI."

"Large language models are not intelligent. How could they possibly be this mystical thing that's automating things?"

"By saying that AI is going to come for your job soon, it's a great false narrative to spread fear, where people worry that the AI is coming."

"Much more likely the story of AI is that it is a very powerful tool that is dumb as a bag of rocks and, left in the hands of the inexperienced and the naive and the fools, could create catastrophic software that we don't yet know how bad the effects will be."

Podcast: 52 Weeks of Cloud