Beyond Leaderboards: LMArena’s Mission to Make AI Reliable
Podcast: AI + a16z
Published On: Fri May 30 2025

Description:
LMArena cofounders Anastasios N. Angelopoulos, Wei-Lin Chiang, and Ion Stoica sit down with a16z general partner Anjney Midha to talk about the future of AI evaluation. As benchmarks struggle to keep up with the pace of real-world deployment, LMArena is reframing the problem: what if the best way to test AI models is to put them in front of millions of users and let them vote? The team discusses how Arena evolved from a research side project into a key part of the AI stack, why fresh and subjective data is crucial for reliability, and what it means to build a CI/CD pipeline for large models.

They also explore:
Why expert-only benchmarks are no longer enough.
How user preferences reveal model capabilities, and their limits.
What it takes to build personalized leaderboards and evaluation SDKs.
Why real-time testing is foundational for mission-critical AI.

Follow everyone on X:
Anastasios N. Angelopoulos
Wei-Lin Chiang
Ion Stoica
Anjney Midha

Timestamps:
0:04 - LLM evaluation: From consumer chatbots to mission-critical systems
6:04 - Style and substance: Crowdsourcing expertise
18:51 - Building immunity to overfitting and gaming the system
29:49 - The roots of LMArena
41:29 - Proving the value of academic AI research
48:28 - Scaling LMArena and starting a company
59:59 - Benchmarks, evaluations, and the value of ranking LLMs
1:12:13 - The challenges of measuring AI reliability
1:17:57 - Expanding beyond binary rankings as models evolve
1:28:07 - A leaderboard for each prompt
1:31:28 - The LMArena roadmap
1:34:29 - The importance of open source and openness
1:43:10 - Adapting to agents (and other AI evolutions)

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
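
For listeners curious how crowdsourced head-to-head votes can be turned into a leaderboard, below is a minimal illustrative sketch of an Elo-style rating update in Python. It is a generic example, not LMArena's actual methodology or code: the model names, the K factor, and the starting rating are all assumed values for demonstration.

```python
# Illustrative only: turning pairwise "which answer was better?" votes into a
# ranking via an Elo-style update. This is a generic sketch, not LMArena's
# actual rating methodology.
from collections import defaultdict

K = 4.0                                # update step size (assumed for illustration)
ratings = defaultdict(lambda: 1000.0)  # every model starts at an assumed baseline

def expected_win_prob(r_a: float, r_b: float) -> float:
    """Logistic win probability of A over B given current ratings."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def record_vote(model_a: str, model_b: str, winner: str) -> None:
    """Update both ratings after a user votes between two anonymous models."""
    p_a = expected_win_prob(ratings[model_a], ratings[model_b])
    score_a = 1.0 if winner == model_a else 0.0
    ratings[model_a] += K * (score_a - p_a)
    ratings[model_b] += K * ((1.0 - score_a) - (1.0 - p_a))

# Hypothetical votes, purely for demonstration.
votes = [("model-x", "model-y", "model-x"),
         ("model-x", "model-z", "model-z"),
         ("model-y", "model-z", "model-y")]
for a, b, w in votes:
    record_vote(a, b, w)

# Sort by rating to produce a simple leaderboard.
for name, score in sorted(ratings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.1f}")
```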