EP13: Recurrent-Depth Models and Latent Reasoning with Jonas Geiping
Podcast: The Information Bottleneck
Published: Fri Nov 07 2025

Description: In this episode, we host Jonas Geiping from the ELLIS Institute & Max Planck Institute for Intelligent Systems, Tübingen AI Center, Germany. We discuss his research on recurrent-depth models and latent reasoning in large language models (LLMs): what these models can and can't do, the challenges and next breakthroughs in the field, world models, and the future of developing better models. We also discuss safety and interpretability, and the role of scaling laws in AI development.

Chapters:
00:00 Introduction and Guest Introduction
01:03 Peer Review in Preprint Servers
06:57 New Developments in Coding Models
09:34 Open Source Models in Europe
11:00 Dynamic Layers in LLMs
26:05 Training Playbook Insights
30:05 Recurrent Depth Models and Reasoning Tasks
43:59 Exploring Recursive Reasoning Models
46:46 The Role of World Models in AI
48:41 Innovations in AI Training and Simulation
50:39 The Promise of Recurrent Depth Models
52:34 Navigating the Future of AI Algorithms
54:44 The Bitter Lesson of AI Development
59:11 Advising the Next Generation of Researchers
01:06:42 Safety and Interpretability in AI Models
01:10:46 Scaling Laws and Their Implications
01:16:19 The Role of PhDs in AI Research

Links and papers:
Jonas' website - https://jonasgeiping.github.io/
Scaling up test-time compute with latent reasoning: A recurrent depth approach - https://arxiv.org/abs/2502.05171
The Smol Training Playbook: The Secrets to Building World-Class LLMs - https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook
VaultGemma: A Differentially Private Gemma Model - https://arxiv.org/abs/2510.15001

Music:
“Kid Kodi” — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
“Palms Down” — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.