David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?
Podcast: Doom Debates
Published On: Thu Aug 22 2024
Description: Today I'm reacting to David Shapiro's response to my previous episode, and also to David's latest episode with poker champion & effective altruist Igor Kurganov. I challenge David's optimistic stance that superintelligent AI will inherently align with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.

Get full access to Doom Debates at lironshapira.substack.com/subscribe