The AI Policy Podcast

Join CSIS’s Gregory C. Allen, senior adviser with the Wadhwani AI Center, on a deep dive into the world of AI policy. Every two weeks, tune in for insightful discussions regarding AI policy, regulation, innovation, national security, and geopolitics. The AI Policy Podcast is produced by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think tank in Washington, D.C.

In this episode, we break down the White House’s decision to let Nvidia’s H200 chips be exported to China and Greg’s case against the move (00:33). We then discuss Trump’s planned “One Rule” executive order to preempt state AI laws (18:59), examine the NDAA's proposed AI Futures Steering Committee (23:09), and analyze the Genesis Mission executive order (26:07), comparing its ambitions and funding reality to the Manhattan Project and Apollo program. We close by looking at why major insurers are seeking to exclude AI risks from corporate policies and how that could impact AI adoption, regulation, and governance (40:29).
In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration’s draft executive order to preempt state AI laws (07:46) and break down the European Commission’s new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic’s report on a China-backed “highly sophisticated cyber espionage campaign” using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37).
In this episode, Georgia Adamson and Saif Khan from the Institute for Progress join Greg to unpack their October 25 paper, "Should the US Sell Blackwell Chips to China?" They discuss the geopolitical context of the paper (3:26), how the rumored B30A would compare to other advanced AI chips (11:37), and the potential consequences if the US were to permit B30A exports to China (32:00). Their paper is available here.
One of the most common questions we get from listeners is how to build a successful career in AI policy—so we dedicated an entire episode to answering it. We cover the most formative experiences from Greg's career journey (3:30), general principles for professional success (45:09), and actionable tips specific to breaking into the AI policy space (1:11:52).
In this episode, we cover OpenAI’s latest video-generation model Sora 2 (1:02), concrete harms and potential risks from deepfakes (5:18), the underlying technology and its history (27:03), and how policy can mitigate harms (36:31).
In this episode, we are joined by Rep. Jay Obernolte, one of Congress’s leading voices on AI policy. We discuss his path from developing video games to serving in Congress (00:49), the work of the bipartisan House Task Force on AI and its final report (9:39), competing approaches to designing AI regulation in Congress (16:38), and prospects for federal preemption of state AI legislation (40:32). Congressman Obernolte has represented California’s 23rd district since 2021. He co-chaired the bipartisan House Task Force on AI, leading the development of an extensive December 2024 report outlining a congressional agenda for AI. He also serves as vice-chair of the Congressional AI Caucus and is the only current member of Congress with an advanced degree in Artificial Intelligence, which he earned from UCLA in 1997. Rep. Obernolte previously served in the California State Legislature.
In this episode, we are joined by economist Harry Holzer to discuss how AI is set to transform labor. Holzer was Chief Economist at the U.S. Department of Labor during the Clinton administration and is currently a Professor of Public Policy at Georgetown University. We break down the fundamentals of the labor market (4:00) and the current and future impact of AI automation (10:30). Holzer also reacts to Anthropic CEO Dario Amodei's warning that AI could eliminate half of entry-level white-collar jobs (23:32) and explains why we need better data capturing AI's impact on the labor market (52:53). Harry Holzer recently co-authored a white paper titled "Proactively Developing & Assisting the Workforce in the Age of AI," which is available here.
In this episode, we dive into California's new AI transparency law, SB 53. We explore the bill's history (00:30), contrast it with the more controversial SB 1047 (6:43), break down the specific disclosure requirements for AI labs of different scales (13:38), and discuss how industry stakeholders and policy experts have responded to the legislation (29:47).
In this episode, we're joined by Joseph Majkut, Director of CSIS' Energy Security and Climate Change Program, to take an in-depth look at energy's role in AI. We explore the current state of the U.S. electrical grid (11:34), bottlenecks in the AI data center buildout (43:45), how U.S. energy efforts compare internationally (1:16:06), and more. Joseph has co-authored three reports on AI and energy: AI for the Grid: Opportunities, Risks, and Safeguards (September 2025), The Electricity Supply Bottleneck on U.S. AI Dominance (March 2025), and The AI Power Surge: Growth Scenarios for GenAI Datacenters Through 2030 (March 2025).
In this episode, we discuss how today’s massive AI infrastructure investments compare to the Manhattan Project (00:33), China’s reported ban on Nvidia chips and its implications for export control policy (13:41), Anthropic’s $1.5 billion copyright settlement with authors (33:49), and recent multibillion-dollar AI investments by Nvidia and ASML (44:42).
In this episode, we discuss China's focus on AI adoption (00:58), the underlying factors driving investor enthusiasm (14:51), and the national security implications of China's booming AI industry (31:47).
In this episode, we are joined by Marietje Schaake, former Member of the European Parliament, to unpack the EU AI Act Code of Practice. Schaake served as Chair of the Working Group on Internal Risk Management and Governance of General-Purpose AI Providers for the Code of Practice, with a focus on AI model safety and security. We discuss the development and drafting of the EU AI Act and Code of Practice (16:47), break down how the Code helps AI companies demonstrate compliance with the Act (28:25), and explore the kinds of systemic risks the AI Act seeks to address (32:00).
In this episode, we unpack the Trump administration’s $8.9 billion deal to acquire a 9.9% stake in Intel, examining the underlying logic, financial terms, and political reactions from across the spectrum (00:33). We then cover Nvidia’s sudden halt in H20 chip production for China, its plans for a Blackwell alternative, and what Beijing’s self-sufficiency push means for the AI race (28:18).
In this episode, we break down the Trump administration’s new licensing agreement with Nvidia and AMD for semiconductor exports and what this development means for U.S. national security (00:35), explore concerns about an AI-driven economic bubble (22:17), and unpack recent advancements in the federal government's adoption of AI after the U.S. General Services Administration approved OpenAI, Anthropic, and Google as vendors (37:18).
In this episode, we cover the renewed debate over U.S. approval of Nvidia’s H20 chip exports to China, from political pushback in Washington to reactions in Beijing (00:30). We also examine how the AI industry is responding to the EU AI Code of Practice and the reasons some companies are choosing not to sign (44:53). Read Gregory C. Allen's report on DeepSeek here. Watch or listen to our event with OSTP Director Michael Kratsios here.
On July 30, the CSIS Wadhwani AI Center hosted Michael Kratsios, Director of the White House Office of Science and Technology Policy, for a discussion breaking down the recently released AI Action Plan and the Trump administration’s vision for U.S. AI leadership and innovation amid strategic competition with China. As the thirteenth Director of the White House OSTP, Mr. Kratsios oversees the development and execution of the nation’s science and technology policy agenda. He leads the Trump administration’s efforts to ensure American leadership in scientific discovery and technological innovation, including in critical and emerging technologies such as artificial intelligence, quantum computing, and biotechnology. In the first Trump administration, he served as the fourth Chief Technology Officer of the United States at the White House and as Under Secretary of Defense for Research and Engineering at the Pentagon. Watch the full event or read the transcript here: Unpacking the White House AI Action Plan with OSTP Director Michael Kratsios
In this special episode, we honor the life of Andrew Schwartz, Chief Communications Officer at CSIS and beloved co-host of this podcast. Andrew was a mentor, a friend, and a tireless champion of the CSIS Wadhwani AI Center’s work. His humor, personal stories, and passion shaped this show and left a lasting impact on all of us. Our team, our community, and CSIS will miss him deeply.
In this episode, we are joined by Kyle Chan, postdoctoral researcher at Princeton’s Sociology Department and adjunct researcher at the RAND Corporation, to explore China's approach to AI industrial policy. We discuss the fundamentals of industrial policy and how it operates in China's digital technology sector (4:15), the evolution of China's AI industrial policy toolkit and its impact on companies (19:29), China's current AI priorities, protectionism strategies, and adoption patterns (47:05), and the future trajectory of China's AI industrial policy amid US-China competition (1:12:22). Kyle co-authored RAND's June 26 report "Full Stack: China’s Evolving Industrial Policy for AI," which is available here.
In this episode, we cover the Senate's vote to remove the moratorium on state AI laws from the reconciliation bill (00:38), the latest AI copyright court rulings involving Meta and Anthropic (7:38), key takeaways from the House Select Committee on China's AI hearing (20:55), and the latest developments surrounding DeepSeek, including export control impacts and military ties (27:45).
In this episode, we’re joined by Miles Brundage, independent AI policy researcher and former Head of Policy Research at OpenAI, and Chris Rohlf, Security Engineer at Meta and cybersecurity expert. We cover the fundamentals of cybersecurity today (9:20), whether AI is tipping the offense-defense balance (21:00), the critical challenge of securing AI model weights (34:55), the debate over “AI security doomerism” (1:03:15), and how policymakers can strengthen incentives to secure AI systems (1:08:46).
In this episode, we discuss the U.S. AI Safety Institute's rebrand to the Center for AI Standards and Innovation (00:37), BIS Undersecretary Jeffrey Kessler's testimony on semiconductor export controls (10:36), and Meta's new AI superintelligence lab and accompanying $15 billion investment in Scale AI (22:26).
On June 9, 2025, the CSIS Wadhwani AI Center hosted Ryan Tseng, Co-Founder and President of Shield AI, a company building AI-powered software to enable autonomous capabilities for defense and national security. Mr. Tseng leads strategic partnerships with defense and policy leaders across the United States and internationally. Under his leadership, Shield AI secured major contracts with the U.S. Special Operations Command, Air Force, Marine Corps, and Navy, while expanding internationally with offices opening in Ukraine and the UAE. Watch the full event here.
In this episode, we discuss House Republicans’ proposed moratorium on state and local AI laws (00:57), break down AI-related appropriations across the executive branch (18:54), and unpack the safety issues and safeguards of Anthropic's newest model, Claude Opus 4 (26:51). Correction: In this episode, a quote was incorrectly attributed directly to Rep. Laurel Lee (R-Fla.). The statement—“Should the provision be stripped from the Senate reconciliation bill, some Republicans are eyeing separate legislation.”—was reported by The Hill as a paraphrase of Rep. Lee’s comments.
In this episode, we discuss Princeton researcher Kyle Chan's op-ed in the New York Times on China's industrial policy for AI and advanced technologies (0:35), what the Bureau of Industry and Security's new controls on Huawei's Ascend chips mean for China's AI ecosystem (10:09), and our biggest takeaways from President Trump's visit to the Middle East (19:07).
In this episode, we discuss the Trump administration’s decision to rescind the AI Diffusion Framework (1:34), the message of top AI executives in their recent Senate testimony (20:03), what AI adoption could mean for the IRS (35:15), the U.S. Copyright Office’s latest report on generative AI training (44:44), and what AI policy might look like in the new papacy (49:24).
In this episode, we discuss what the Trump administration's Fiscal Year 2026 budget request means for federal AI spending, what might happen to the AI Diffusion Framework before its May 15 implementation deadline, and what the Chinese Communist Party Politburo's Study Session on AI indicates about China's AI ambitions.
On May 1, 2025, the CSIS Wadhwani AI Center hosted Alexandr Wang, Founder and CEO of Scale AI, a company accelerating AI development by delivering expert-level data and technology solutions to leading AI labs, multinational enterprises, and governments. He shared his insights on key issues shaping the future of AI policy, such as U.S.-China AI competition, international AI governance, and the new administration’s approach to AI innovation, regulation, and global standards. Alexandr founded Scale AI in 2016 as a 19-year-old MIT student with the vision of providing the critical data and infrastructure needed for complex AI projects. Under his leadership, Scale AI has grown to a nearly $14 billion valuation, serving hundreds of customers across industries ranging from finance to government agencies, and creating flexible, impactful AI work for hundreds of thousands of people worldwide. Watch the full event at the following link: https://www.csis.org/analysis/scale-ais-alexandr-wang-securing-us-ai-leadership
In this episode, we discuss OSTP Director Michael Kratsios’s recent speech on US technology policy at the Endless Frontiers Retreat (0:19), the Trump administration’s decision to control the Nvidia H20 chip (10:48), and what Huawei’s announcement of the Ascend 920 chip means for U.S.-China AI competition (18:24).
In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Andrew Freedman, Chief Strategic Officer at Fathom, an organization whose mission is to find, build, and scale the solutions needed to help society transition to a world with AI. They will discuss the origins and purpose of Fathom, key initiatives shaping AI policy around the country such as California Senate Bill 813, and the new administration's approach to AI governance. They will also unpack the concept of “Private AI Governance” and what it means for the future of the U.S. AI ecosystem. Andrew Freedman is the Chief Strategic Officer at Fathom, with over 15 years of expertise in emerging industries and regulatory frameworks. Previously, he was a Partner at Forbes Tate Partners, where he led the firm's coalition work in technology and emerging regulatory sectors. Andrew has advised governments in California, Canada, and Massachusetts, and has been a speaker at major conferences like Code Conference and Aspen Ideas Fest. Earlier in his career, Andrew served as Chief of Staff to Colorado's Lieutenant Governor, where he established the Office of Early Childhood and secured a $45 million Race to the Top Grant. He also managed the Colorado Commits to Kids campaign, raising $11 million in three months for education funding. Andrew holds a J.D. from Harvard Law School and a B.A. from Tufts University.
In this episode, we discuss what the Trump administration’s tariffs mean for the US AI ecosystem (2:42), reporting that Nvidia’s H20s will be exempt from export controls (8:58), the latest AI guidance from the White House Office of Management and Budget (OMB) (12:48), and the EU’s AI Continent Action Plan (17:07).
In this episode, we are joined by Matt Sheehan, fellow at the Carnegie Endowment for International Peace. We discuss the evolution of China's AI policymaking process over the past decade (6:45), the key institutions shaping Chinese AI policy today (44:30), and the changing nature of China's attitude to AI safety (50:55).
In this episode, we discuss AI companies' responses to the White House AI Action Plan Request For Information (RFI) related to key areas like export controls and AI governance (00:51), the release of the Joint California Policy Working Group on AI Frontier Models draft report (24:45), and how AI might be affecting the computer programming job market (40:10).
In this episode of the AI Policy Podcast, Wadhwani AI Center Director Gregory C. Allen is joined by Dean Ball, Research Fellow in the Artificial Intelligence & Progress Project at George Mason University’s Mercatus Center. They will discuss how state and local governments are approaching AI regulation, what factors are shaping these efforts, where state and local efforts intersect, and how a fractured approach to governance might affect the AI policy landscape. In addition to his role at the George Mason University’s Mercatus Center, Dean Ball is the author of the Substack Hyperdimensional. Previously, he was Senior Program Manager for the Hoover Institution's State and Local Governance Initiative. Prior to his position at the Hoover Institution, he served as Executive Director of the Calvin Coolidge Presidential Foundation, based in Plymouth, Vermont and Washington, D.C. He also worked as the Deputy Director of State and Local Policy at the Manhattan Institute for Policy Research from 2014–2018.
In this episode, we discuss the Wadhwani AI Center’s latest publication on the implications of DeepSeek for the future of export controls (0:40), Chinese company Manus AI (9:05), what Secretary Hegseth’s memo means for the DOD AI ecosystem (15:27), and xAI’s acquisition of 1 million square feet for its new data center in Memphis (21:28).
In this special episode, we are joined by Georgia Adamson, Research Associate at the CSIS Wadhwani AI Center, Lennart Heim, Associate Information Scientist at RAND, and Sam Winter-Levy, Fellow for Technology and International Affairs at the Carnegie Endowment for International Peace. We outline the biggest takeaways from our recent report about the UAE's role in the global AI race (2:34), the details of the Microsoft-G42 deal (17:21), our assessment of the UAE-China relationship when it comes to AI technology (25:45), and the future of export controls (44:07).
In our first video episode, we discuss xAI's release of the Grok 3 family of models, the Department of Government Efficiency's (DOGE) impact on the federal AI workforce, Xi Jinping's meeting with major Chinese AI company executives, and what the Evo-2 model could mean for the future of biology.
In this special episode, Greg breaks down his biggest takeaways from the Paris AI Action Summit. He discusses France’s goals for the summit (5:05), Vice President JD Vance’s speech about the US vision for AI (12:16), the EU’s approach to the convening (17:13), why the US and UK did not sign the summit declaration (20:50), and the rebranded UK AI Security Institute (23:20).
In this crossover episode with Truth of the Matter, we discuss the origins of Chinese AI company DeepSeek (0:55), the release of its DeepSeek R1 model and what it means for the future of U.S.-China AI competition (3:05), why it prompted such a massive reaction by U.S. policymakers and the U.S. stock market (14:04), and the Trump administration's response (24:03).
In this episode, we break down President Trump's repeal of the Biden administration's Executive Order (EO) on AI (1:00), the release of the America First Trade Policy memorandum (9:52), and the Trump administration's own AI EO (15:02). We are then joined by Lennart Heim, Senior Information Scientist at the RAND Corporation, to discuss the Stargate announcement (20:40), how AI company CEOs are talking about AGI (38:36), and why the latest models from DeepSeek matter (52:02).
In this special episode of the AI Policy Podcast, Andrew, Greg, and CSIS Energy Security and Climate Change Program Director Joseph Majkut discuss the Biden administration's Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure. They consider the motivation for this measure and its primary goals (1:07), its reception among AI and hyperscaler companies (12:18), and how the Trump administration might approach AI and energy (17:50).
In this pressing episode, we break down the release of the Biden administration's Framework for Artificial Intelligence Diffusion. We discuss the rationale for this latest control (0:52), and its reception among major AI and semiconductor firms (8:14), U.S. allies (17:15), and the incoming administration (19:48).
In this episode, we discuss the December 2nd semiconductor export control update (0:45), the Trump administration’s appointment of David Sacks as the White House AI czar (5:35), the OpenAI and Anduril partnership and its implication for national security (9:31), and the latest from China’s autonomous fighter aircraft program (16:39).
On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Dr. Ben Buchanan, the White House Special Advisor for AI. They discuss the Biden administration's biggest AI policy achievements, including the AI Bill of Rights, the AI Safety Institute, the Hiroshima AI Process, and the National Security Memorandum on AI.
On this special episode, New York Times reporter Ana Swanson is joined by Neil Chilson, Head of AI Policy at The Abundance Institute; Kara Frederick, Director of the Tech Policy Center at The Heritage Foundation; and Brandon Pugh, Director and Senior Fellow for Cybersecurity and Emerging Threats at the R Street Institute. They discuss what we can expect from the incoming Trump administration when it comes to AI policy.
In this episode, we are joined by Alondra Nelson, the Harold F. Linder Chair in the School of Social Science at the Institute for Advanced Study and former acting director of the White House Office of Science and Technology Policy (OSTP). We discuss her background in AI policy (1:30), the Blueprint for an AI Bill of Rights (9:43), its relationship to the White House Executive Order on AI (23:47), the Senate AI Insight Forums (29:55), the European approach to AI governance (29:55), state-level AI regulation (41:20), and how the incoming administration should approach AI policy (47:04).
In this episode, we discuss recent reporting that so-called "scaling laws" are slowing and the potential implications for the policy community (0:37), the latest models coming out of the Chinese AI ecosystem (12:37), the U.S.-China Economic and Security Review Commission's recommendation for a Manhattan Project for AI (19:02), and the biggest takeaways from the first draft of the European Union's General-Purpose AI Code of Practice (25:46). https://www.csis.org/analysis/eu-code-practice-general-purpose-ai-key-takeaways-first-draft https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations https://www.csis.org/analysis/understanding-military-ai-ecosystem-ukraine https://www.csis.org/events/international-ai-policy-outlook-2025 Correction: Hyperscalers Meta, Microsoft, Google, and Amazon are expected to invest $300 billion in AI and AI infrastructure in 2025.
In this episode we are joined by Vilas Dhar, President and Trustee of the Patrick J. McGovern Foundation, a 21st century $1.5 billion philanthropy advancing AI and data solutions to create a thriving, equitable, and sustainable future for all. We discuss his background (1:26), the foundation and its approach to AI philanthropy (4:11), building public sector capacity in AI (13:00), the definition of AI governance (20:07), ongoing multilateral governance efforts (23:01), how liberal and authoritarian norms affect AI (28:35), and what the future of AI might look like (30:30).
In this episode, we discuss what AI policy might look like under the second Trump administration. We dive into the first Trump administration's achievements (0:50), how the Trump campaign handled AI policy (3:37), and where the new administration might fall on key issue areas like national security (5:59), safety (7:37), export controls (11:27), open-source (14:04), and more.
In this special episode, we discuss the National Security Memorandum on AI that the Biden administration released on October 24, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.
On this special episode, Wadhwani AI Center director Gregory C. Allen is joined by Schuyler Moore, the first-ever Chief Technology Officer of U.S. Central Command (CENTCOM), Justin Fanelli, the Chief Technology Officer of the Department of the Navy, and Dr. Alex Miller, the Chief Technology Officer for the Chief of Staff of the Army for a discussion on the warfighter's adoption of emerging technologies. They discuss how U.S. Central Command (CENTCOM), in conjunction with the Army and Navy, has been driving the use of AI and other advanced technologies through a series of exercises such as Desert Sentry, Digital Falcon Oasis, Desert Guardian, and Project Convergence.
In this episode, we are joined by former MEP Dragoș Tudorache, co-rapporteur of the EU AI Act and Chair of the Special Committee on AI in the Digital Age. We discuss where we are in the EU AI Act roadmap (2:37), how to balance innovation and regulation (11:20), the future of the EU AI Office (25:00), and the increasing energy infrastructure demands of AI (42:30). The European Approach to Regulating Artificial Intelligence
In this episode, we discuss Nvidia's earnings report and its implications for the AI industry (0:53), the impact of China's Gallium and Germanium export controls on the global semiconductor competition (9:50), and why OpenAI is demonstrating its capabilities for the national security community (18:00).
In this episode, we are joined by Jeff Alstott, expert at the National Science Foundation (NSF) and director of the Center for Technology and Security Policy at RAND, to discuss past technology forecasting across the national security community (20:45) and a new NSF initiative called Assessing and Predicting Technology Outcomes (APTO) (31:30). https://new.nsf.gov/tip/updates/nsf-invests-nearly-52m-align-science-technology
In this episode, we discuss the CSIS Wadhwani Center for AI and Advanced Technologies latest report on the DOD's Collaborative Combat Aircraft (CCA) program (0:58), what recent news about AI chip smuggling means for U.S. export controls (13:40), how California's SB 1047 might affect AI regulation (23:18), and our biggest takeaways from the EU AI Act going into force (33:52). Collaborative Combat Aircraft Program: Good News, Bad News, and Unanswered Questions
In this episode, we are joined by Andrei Iancu, former Undersecretary of Commerce for Intellectual Property and former Director of the US Patent and Trademark Office (USPTO), to discuss whether AI-generated works can be copyrighted (15:52), what the latest USPTO guidance means for the patent subject matter eligibility of AI systems (22:31), who can claim inventorship for AI-facilitated inventions (36:00), and the use of AI by patent and trademark applicants and the USPTO (53:43).
On this special episode, the CSIS Wadhwani Center for AI and Advanced Technologies is pleased to host Elizabeth Kelly, Director of the United States Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce. The U.S. AI Safety Institute (AISI) was announced by Vice President Kamala Harris at the UK AI Safety Summit in November 2023. The institute was established to advance the science, practice, and adoption of AI safety in the face of risks including those to national security, public safety, and individual rights. Director Kelly will discuss the U.S. AISI’s recently released Strategic Vision, its activities under President Biden’s AI Executive Order, and its approach to the AISI global network announced at the AI Seoul Summit.
In this episode, we discuss what AI policy might look like after the 2024 U.S. presidential election. We dive into the past (1:00), present (9:50), and future (22:50) of both the Trump and Harris campaigns’ AI policy positions and where they fall on key issue areas like safety (23:01), open-source (25:17), energy infrastructure (33:27), and more.
In this episode, DOD Chief Digital and AI Officer Dr. Radha Plumb joins Greg Allen to discuss the Chief Digital and Artificial Intelligence Office (CDAO)'s current role at the department and a preview of its upcoming projects. The CDAO was established in 2022 to create, implement, and steer the DOD’s digital transformation and adoption of AI. Under Dr. Plumb’s leadership, the CDAO recently announced a new initiative, Open Data and Applications Government-owned Interoperable Repositories (DAGIR), which will open a multi-vendor ecosystem connecting DOD end users with innovative software solutions. Dr. Plumb discusses the role of Open DAGIR and a series of other transformative projects currently underway at the CDAO.
In this episode, we discuss the state of autonomous weapons systems adoption in Ukraine (00:55), our takeaways from the Supreme Court's decision to overturn the Chevron Doctrine and the implications for AI regulation (17:35), the delayed deployment of Apple Intelligence in the EU (30:55), and a breakdown of Nvidia's deal to sell its technology to data centers in the Middle East (41:30).
In this episode, we discuss our biggest takeaways from the AI agenda at the G7 Leaders' Summit (0:41), the details of the Apple-OpenAI partnership announcement (8:05), and why Saudi Aramco's investment in Zhipu AI represents a groundbreaking moment in China-Saudi Arabia relations (16:25).
In this episode, we break down the policy outcomes from the AI Seoul Summit (0:22), the latest news from the U.S.-China AI safety talks (7:59), and why the Zhousidun (Zeus's Shield) dataset matters (16:30).
In this episode, we discuss our biggest takeaways from the bipartisan AI policy roadmap led by Senate Majority Leader Chuck Schumer (1:10), what to expect from the U.S.-China AI safety dialogue (9:55), recent updates to the DOD’s Replicator Initiative (19:25), and Microsoft’s new Intelligence Community AI Tool (29:31).
In this episode, we discuss our biggest takeaways from the new E.U.-Japan AI safety cooperation agreement (0:39), why the latest staffing update from the U.S. AI Safety Institute matters (4:57), and how the Air Force's Collaborative Combat Aircraft (CCA) contract award is changing the way the DOD develops autonomous systems (11:40).
In this episode, we discuss Microsoft's investment in G42 and questions surrounding G42's ties to China (1:12), the latest reporting about the Israeli military's use of AI and the policy implications of advanced technologies in warfare (9:23), and Meta's new watermarking policy (23:01). Related CSIS events: The DARPA Perspective on AI and Autonomy at the DOD; Scaling AI-enabled Capabilities at the DOD: Government and Industry Perspectives; The State of DOD AI and Autonomy Policy.
In this episode, we discuss a framework for understanding the rapidly changing AI policy landscape (0:53), the first-of-its-kind U.S. and U.K. partnership on AI safety (8:20), OpenAI's Voice Engine system (10:53), OMB's latest AI policy announcement (18:00), and Mexico's new role in AI infrastructure (21:50).
In this episode, we give an insider's view on the G7's March Digital and Tech Ministerial meetings (6:15), the Hiroshima Code of Conduct and its interaction with the EU AI Act (10:50), a breakdown of the new TikTok bill (23:25), and how AI use impacts already overstressed power grids (40:49). Our new report: Advancing the Hiroshima AI Process Code of Conduct under the 2024 Italian G7 Presidency: Timeline and Recommendations. Contact us: aipolicypodcast@csis.org
In this episode, we are joined by Volvo CEO Jim Rowan to discuss how AI and autonomy are transforming Volvo's business (3:28), the fragmented regulatory environment around autonomous driving (17:00), navigating the increasingly tense U.S.-China relationship (23:10), its implications for the EV industry (33:14), and upcoming policy changes to look out for (52:00).
In this episode, we discuss the latest legal issues facing OpenAI (0:38), the new Microsoft-Mistral partnership (15:25), and what we can expect from the newly founded U.S. AI Safety Institute Consortium (18:36).
On this special episode of the AI Policy Podcast, we are joined by Chris Miller, author of Chip War: The Fight for the World's Most Critical Technology, and Professor of International History at Tufts University. We discuss Secretary of Commerce Gina Raimondo's CHIPS Act announcement (1:38), how the semiconductor landscape has changed since Chip War was published (6:39), why U.S. export controls on Russia and China are leaky (12:29), and the latest news from the Chinese semiconductor industry (22:58).
On this episode of the AI Policy Podcast, we discuss the release of OpenAI's Sora tool (0:35), its implications for media (4:10), the risks associated with the model (7:05), and what it all means for elections (18:33) and copyright (22:17).
On this episode, we discuss the most recent E.U. AI Act milestone (1:19), the latest from the AI chip war (12:23), and U.S.-China AI safety dialogues (22:15).
On this episode of the AI Policy Podcast, we discuss global technology competition with Senators Michael Bennet and Todd Young.
On this episode of the AI Policy Podcast, we discuss our outlook for AI in 2024.
On the first episode of the AI Policy Podcast, listen to our 2023 AI year in review. Learn about the biggest developments, newest policies, and our responses to it all.
CSIS’s Gregory C. Allen, Director of the Wadhwani Center for AI and Advanced Technologies, is joined by cohost H. Andrew Schwartz on a deep dive into the world of AI policy. Every two weeks, tune in for insightful discussions regarding AI policy, regulation, innovation, national security, and geopolitics. The AI Policy Podcast is produced by the Wadhwani Center for AI and Advanced Technologies at CSIS, a bipartisan think tank in Washington, D.C.