AI, Liability, and Hallucinations in a Changing Tech and Law Environment
Podcast: Stanford Legal
Published On: Thu May 15 2025
Description: Since ChatGPT came on the scene, numerous incidents have surfaced involving attorneys submitting court filings riddled with AI-generated hallucinations—plausible-sounding case citations that purport to support key legal propositions but are, in fact, entirely fictitious. As sanctions against attorneys mount, it seems clear there are a few kinks in the tech. Even AI tools designed specifically for lawyers can be prone to hallucinations. In this episode, we look at the potential and risks of AI-assisted tech in law and policy with two Stanford Law researchers at the forefront of this issue: RegLab Director Professor Daniel Ho and JD/PhD student and computer science researcher Mirac Suzgun. Together with several co-authors, they examine the emerging risks in two recent papers, “Profiling Legal Hallucinations in Large Language Models” (Oxford Journal of Legal Analysis, 2024) and the forthcoming “Hallucination-Free?” in the Journal of Empirical Legal Studies. Ho and Suzgun offer new insights into how legal AI is working, where it’s failing, and what’s at stake.

Links:
Daniel Ho >>> Stanford Law page
Stanford Institute for Human-Centered Artificial Intelligence (HAI) >>> Stanford University page
Regulation, Evaluation, and Governance Lab (RegLab) >>> Stanford University page

Connect:
Episode Transcripts >>> Stanford Legal Podcast Website
Stanford Legal Podcast >>> LinkedIn Page
Rich Ford >>> Twitter/X
Pam Karlan >>> Stanford Law School Page
Stanford Law School >>> Twitter/X
Stanford Lawyer Magazine >>> Twitter/X

(00:00:00) Introduction to AI in Legal Education
(00:05:01) AI Tools in Legal Research and Writing
(00:12:01) Challenges of AI-Generated Content
(00:20:0) Reinforcement Learning with Human Feedback
(00:30:01) Audience Q&A