Podcast: Machine Learning Street Talk (MLST)
Published On: Tue Mar 03 2026

Description: Dive into the realities of AI-assisted coding, the origins of modern fine-tuning, and the cognitive science behind machine learning with fast.ai founder Jeremy Howard. In this episode, we unpack why AI might be turning software engineering into a slot machine, and how to maintain true technical intuition in the age of large language models.

Sponsor: GTC, the premier AI conference, is coming, and it's a great opportunity to learn about AI. NVIDIA and partners will showcase breakthroughs in physical AI, AI factories, agentic AI, and inference, exploring the next wave of AI innovation for developers and researchers. Register for virtual GTC for free using my link and win an NVIDIA DGX Spark (https://nvda.ws/4qQ0LMg)

Jeremy Howard is a renowned data scientist, researcher, entrepreneur, and educator. As the co-founder of fast.ai, former President of Kaggle, and the creator of ULMFiT, Jeremy has spent decades democratizing deep learning. His pioneering work laid the foundation for modern transfer learning and the pre-training and fine-tuning paradigm that powers today's language models.

Key Topics and Main Insights Discussed:
- The Origins of ULMFiT and Fine-Tuning
- The Vibe Coding Illusion and Software Engineering
- Cognitive Science, Friction, and Learning
- The Future of Developers

RESCRIPT: https://app.rescript.info/public/share/BhX5zP3b0m63srLOQDKBTFTooSzEMh_ARwmDG_h_izk

Jeremy Howard:
https://x.com/jeremyphoward
https://www.answer.ai/

---

TIMESTAMPS:
00:00:00 Introduction & GTC Sponsor
00:04:30 ULMFiT & The Birth of Fine-Tuning
00:12:00 Intuition & The Mechanics of Learning
00:18:30 Abstraction Hierarchies & AI Creativity
00:23:00 Claude Code & The Interpolation Illusion
00:27:30 Coding vs. Software Engineering
00:30:00 Cosplaying Intelligence: Dennett vs. Searle
00:36:30 Automation, Radiology & Desirable Difficulty
00:42:30 Organizational Knowledge & The Slope
00:48:00 Vibe Coding as a Slot Machine
00:54:00 The Erosion of Control in Software
01:01:00 Interactive Programming & REPL Environments
01:05:00 The Notebook Debate & Exploratory Science
01:17:30 AI Existential Risk & Power Centralization
01:24:20 Current Risks, Privacy & Enfeeblement

---

REFERENCES:

Blog Posts:
[00:03:00] fast.ai Blog: Self-Supervised Learning
https://www.fast.ai/posts/2020-01-13-self_supervised.html
[00:13:30] DeepMind Blog: Gemini Deep Think
https://deepmind.google/blog/accelerating-mathematical-and-scientific-discovery-with-gemini-deep-think/
[00:19:30] Modular Blog: The Claude C Compiler Analysis
https://www.modular.com/blog/the-claude-c-compiler-what-it-reveals-about-the-future-of-software
[00:19:45] Anthropic Engineering Blog: Building a C Compiler
https://www.anthropic.com/engineering/building-c-compiler
[00:48:00] Cursor Blog: Scaling Agents
https://cursor.com/blog/scaling-agents
[01:05:15] fast.ai Blog: nbdev Merge Driver (Jupyter + Git)
https://www.fast.ai/posts/2022-08-25-jupyter-git.html
[01:17:30] Jeremy Howard: Response to AI Risk Letter
https://www.normaltech.ai/p/is-avoiding-extinction-from-ai-really

Books:
[00:08:30] M. Chirimuuta: The Brain Abstracted
https://mitpress.mit.edu/9780262548045/the-brain-abstracted/
[00:30:00] Daniel Dennett: Consciousness Explained
https://www.amazon.com/Consciousness-Explained-Daniel-C-Dennett/dp/0316180661
[00:42:30] Cesar Hidalgo: Infinite Alphabet / Laws of Knowledge
https://www.amazon.com/Infinite-Alphabet-Laws-Knowledge/dp/0241655676

Archive Article:
[00:13:45] MLST Archive: Why Creativity Cannot Be Interpolated
https://archive.mlst.ai/read/why-creativity-cannot-be-interpolated

Research Study:
[00:24:30] METR Study: AI OS Development
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Papers:
[00:24:45] Fred Brooks: No Silver Bullet
https://www.cs.unc.edu/techreports/86-020.pdf
[00:30:15] John Searle: Minds, Brains, and Programs
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/minds-brains-and-programs/DC644B47A4299C637C89772FACC2706A