LLMs in 2026: What’s Real, What’s Hype, and What’s Coming Next
Podcast: Digital Disruption with Geoff Nielson
Published On: Mon Feb 23 2026

Description: Is AI actually going to replace developers? Or is the hype getting ahead of reality?

On this episode of Digital Disruption, we're joined by Sebastian Raschka, AI research engineer and author. Sebastian sits down with Geoff Nielson to unpack the real state of large language models (LLMs) in 2026. As an LLM research engineer, Sebastian bridges deep technical expertise with practical, real-world AI implementation. In this conversation, he cuts through the AI hype to focus on what's actually achievable with modern LLMs, reasoning models, reinforcement learning, and inference scaling, and where the limitations still exist. Sebastian explains why most companies should not build a large language model from scratch, but also why understanding the fundamentals may be one of the most important investments technology leaders can make.

This conversation breaks down:
◼️ Why coding is currently the strongest LLM use case
◼️ Why "reasoning" models still fail simple tasks like counting letters in "strawberry"
◼️ The reality behind Math Olympiad gold-level AI claims
◼️ The true cost of training large models (millions in GPU compute)
◼️ The privacy risks of uploading proprietary data into APIs
◼️ How enterprises should think about fine-tuning vs. API-based prompting
◼️ Why benchmarks and leaderboards can be misleading

Sebastian Raschka has over a decade of experience in artificial intelligence and machine learning. His work bridges academia and industry: he has served as a Senior Engineer at Lightning AI and as a faculty member at the University of Wisconsin–Madison. He is the author of Build a Large Language Model from Scratch and is widely recognized for his practical, code-driven approach to AI education and research.
His expertise lies in LLM research, transformer architectures, reinforcement learning, and the development of high-performance AI systems, with a strong focus on real-world implementation.

In this video:
00:00 Intro
01:23 The Rise of "Reasoning" and Thinking Models
03:06 Inference scaling vs training scaling
06:17 What LLMs are actually good (and bad) at
07:09 The "Strawberry" Problem and Reasoning Limits
09:00 Tool use and why LLMs don't need to count letters
10:20 Math Olympiads & self-refinement techniques
12:01 Why coding is the killer use case
13:28 Does AI make developers obsolete?
18:02 The reality of 10x developer productivity claims
21:43 Generalist vs specialized models
23:53 Build from scratch vs fine-tune vs API prompting
25:01 The true cost of training an LLM
27:33 API customization vs owning your model
29:12 Who should build an LLM from scratch?
33:16 Data requirements & why you need terabytes
34:28 Enterprise data challenges
35:40 Retrieval-Augmented Generation (RAG) explained
46:05 Multi-agent systems & tool calling
49:48 The problem with LLM benchmarks
55:43 Using LLMs as judges
58:00 Biggest misconceptions about LLMs
1:04:19 Reinforcement learning with verifiable rewards
1:06:32 Advice for technology leaders
1:11:48 Escaping AI hype through fundamentals

Connect with Sebastian:
LinkedIn: https://www.linkedin.com/in/sebastianraschka/
X: https://x.com/rasbt

Our links:
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG