
The AI Policy Podcast
by Center for Strategic and International Studies
Insights from recent episode analysis
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).

- Per-Episode Audience (est. listeners per new episode within ~30 days): 10,001 - 25,000
- Monthly Reach (unique listeners across all episodes, 30 days): 25,001 - 75,000
- Active Followers (loyal subscribers who consistently listen): 15,001 - 40,000
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly.

- Total Followers: —
- Total Plays: —
- Total Reviews: —

* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Hosts (from 10 episodes)
Recent guests
Recent episodes

- The Next Chapter of the AI Policy Podcast (Apr 16, 2026, 8m 01s)
- Unpacking Russian Military AI with Kateryna Bondar (Apr 14, 2026, 1h 15m 08s)
- Inside Project Maven and AI-Powered Warfare with Katrina Manson (Mar 26, 2026, 1h 01m 29s)
- Trump's National AI Framework and Super Micro's Chip Smuggling Indictment (Mar 24, 2026, 1h 02m 41s)
- Anthropic Goes to Court While Claude Goes to War in Iran (Mar 11, 2026, 1h 03m 17s)
Social Links & Contact
Official channels & resources
RSS Feed
| Date | Episode | Topics | Guests | Brands | Places | Keywords | Sponsor | Length |
|---|---|---|---|---|---|---|---|---|
| 4/16/26 | The Next Chapter of the AI Policy Podcast | AI policy, podcast transition, +3 | — | Wadhwani AI Center; Center for Strategic and International Studies | — | AI policy, podcast, +3 | — | 8m 01s |
| 4/14/26 | Unpacking Russian Military AI with Kateryna Bondar | military AI, Russia, +4 | Kateryna Bondar | Wadhwani AI Center; How Russia Is Building a Sovereign Drone Ecosystem for AI-Driven Autonomy; +1 | Ukraine, Russia | military AI, Russia, +5 | — | 1h 15m 08s |
| 3/26/26 | Inside Project Maven and AI-Powered Warfare with Katrina Manson | AI warfare, Project Maven, +4 | Katrina Manson | Project Maven; Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare | Ukraine, Russia, +1 | Project Maven, AI warfare, +6 | — | 1h 01m 29s |
| 3/24/26 | Trump's National AI Framework and Super Micro's Chip Smuggling Indictment | AI legislation, chip smuggling, +3 | — | Super Micro; Nvidia; +1 | — | AI legislation, Trump, +5 | — | 1h 02m 41s |
| 3/11/26 | Anthropic Goes to Court While Claude Goes to War in Iran | legal issues, AI technology, +3 | — | Claude; Anthropic; +2 | Iran | Anthropic, Pentagon, +5 | — | 1h 03m 17s |
| 3/6/26 | A Crash Course on AI Standards with Google DeepMind's Owen Larter | AI standards, AI governance, +3 | Owen Larter | Google DeepMind; Center for Strategic and International Studies; +1 | — | AI standards, Google DeepMind, +3 | — | 38m 44s |
| 3/3/26 | Andreessen Horowitz's Jai Ramaswamy, Matt Perault: AI Regulation & Innovation | AI regulation, innovation, +3 | Jai Ramaswamy, Matt Perault | Andreessen Horowitz; cLabs; +8 | — | AI, regulation, +3 | — | 1h 10m 20s |
| 2/25/26 | Inside Anthropic's Standoff with the Pentagon and What It Means for Military AI | military AI, AI policy, +4 | — | Blackwell chips; frontier model distillation; +4 | — | Anthropic, Pentagon, +5 | — | 1h 00m 07s |
| 2/20/26 | Live from New Delhi: Our Takeaways from the India AI Impact Summit | AI impact, India, +3 | — | Center for Strategic and International Studies | New Delhi | AI, India, +5 | — | 50m 50s |
| 2/10/26 | Inside The Second International AI Safety Report with Writers Stephen Clare and Stephen Casper | AI safety, technical safeguards, +3 | Stephen Clare, Stephen Casper | Wadhwani AI Center; MIT; +2 | — | AI safety report, technical safeguards, +3 | — | 1h 33m 54s |
| Date | Episode | Description | Length |
|---|---|---|---|
| 2/5/26 | Jennifer Pahlka on Reforming Government for the AI Era | In this special episode recorded at Fathom’s 2026 Ashby Workshops, Greg sits down with Jennifer Pahlka, founder of Code for America and author of Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better. Jennifer walks us through her career journey, from filing paperwork at a child welfare agency to helping pioneer the U.S. Digital Service in the Obama administration (3:45). She describes the need for upstream policy reform (11:29) and discusses AI’s potential to both empower public servants to challenge antiquated practices and help policymakers simplify complex regulations (28:03). Finally, Jennifer shares some AI use cases she’s particularly excited about in government (59:34). Jennifer Pahlka is a senior fellow at the Niskanen Center and the Federation of American Scientists and a senior advisor at the Abundance Network. She previously served as U.S. Deputy Chief Technology Officer, helping start the U.S. Digital Service under the second Obama administration, and as a member of the Defense Innovation Board. Read Jennifer’s book Recoding America and check out her Substack, Eating Policy. Jennifer’s recommended reading: Hack Your Bureaucracy by Marina Nitze & Nick Sinai; Crisis Engineering by Marina Nitze, Matthew Weaver, & Mikey Dickerson; The Procedure Fetish by Nicholas Bagley; Why Nothing Works by Marc J. Dunkelman; Kill It with Fire by Marianne Bellotti. | 1h 10m 00s |
| 1/30/26 | The Indian and French Ambassadors to the US on Global AI Summits | This episode cross-posts a fireside chat with the Ambassadors of India and France to the United States, Amb. Vinay Kwatra and Amb. Laurent Bili. The discussion was recorded at the Wadhwani AI Center’s January 30 conference, “Exploring Global AI Policy Priorities Ahead of the India AI Impact Summit.” A full recording of the conference, including additional panels and speakers, can be found here. | 36m 22s |
| 1/22/26 | The Future of Nvidia’s H200 in China and the Pentagon's New AI Strategy | In this episode, we discuss and evaluate the BIS' new export policy for Nvidia's H200 chips (00:31) before turning to Beijing's decision to block H200 imports (20:18). We then unpack the Pentagon's recently published AI Strategy, including the shift it represents in DOD's approach to AI integration (29:17). Read the CNAS commentary "Unpacking the H200 Export Policy" here. | 59m 36s |
| 1/9/26 | xAI's Latest Controversy and New York's New AI Safety Bill | In this episode, we examine Grok’s public posting of child sexual abuse material and non-consensual intimate imagery (00:27), the legal consequences xAI may face (12:41), and the international policy community's response (19:05). We then unpack New York’s RAISE Act, including the politics leading up to Gov. Hochul’s signature (22:51) and the final outcome of negotiations (28:16). | 43m 50s |
| 1/6/26 | China's EUV Manhattan Project and Export Control Mythbusting with Chris McGuire | In this episode, we're joined by Chris McGuire for a conversation about AI and semiconductor export controls. We begin by discussing Chris' career path into AI and national security (1:55), then turn to his views on recent developments, including reports about a Chinese EUV prototype (11:07). We spend the rest of the episode rating common arguments against AI export controls as fact, fiction, or somewhere in between (40:25). Chris is a Senior Fellow for China and Emerging Technologies at the Council on Foreign Relations and a leading expert on U.S.-China AI competition. Before joining CFR, he served as a career government official for over a decade, including as Deputy Senior Director for Technology and National Security at the National Security Council (NSC) from 2022 to 2024. Links to some of Chris' recent work, as discussed in the podcast: China’s AI Chip Deficit: Why Huawei Can’t Catch Nvidia and U.S. Export Controls Should Remain; Testimony on Strengthening Export Controls on Semiconductor Manufacturing Equipment. | 1h 28m 28s |
| 12/29/25 | Previewing India's AI Impact Summit with MeitY Secretary S. Krishnan | Since 2023, a series of global AI summits has brought together world leaders to advance international dialogue and cooperation on artificial intelligence. Building on this momentum, Prime Minister Narendra Modi announced the India AI Impact Summit, which will take place in New Delhi in February 2026. As the first summit in the series to be hosted in a Global South country, the AI Impact Summit aims to amplify Global South perspectives and advance concrete action to address both the opportunities and risks of AI. On December 8, 2025, the CSIS Wadhwani AI Center hosted S. Krishnan, Secretary of India’s Ministry of Electronics and Information Technology (MeitY), for a livestreamed fireside chat with Wadhwani AI Center Senior Adviser Gregory C. Allen. Secretary Krishnan, who leads India’s national AI strategy, outlines India’s policy priorities and shares insights into the goals and global aspirations shaping the upcoming AI Impact Summit. He also offers a comprehensive look at the central role MeitY plays in driving innovation across India’s AI ecosystem. Secretary Krishnan brings more than 35 years of experience in public service, having joined the Indian Administrative Service in 1989. Prior to his current role, he served as the Additional Chief Secretary of the Industries, Investment Promotion and Commerce Department in the Government of Tamil Nadu. He has also served as Senior Advisor in the Office of the Executive Director for India, Sri Lanka, Bangladesh, and Bhutan at the International Monetary Fund, and has represented India in the G20 Expert Groups on International Financial Architecture and Global Financial Safety Nets. Secretary Krishnan holds a bachelor’s degree from St. Stephen’s College in Delhi. | 54m 32s |
| 12/18/25 | Trump Signs EO Targeting State AI Laws While Meta Showcases Risks of Weak Tech Regulation | In this episode, we unpack President Trump’s new executive order targeting state AI laws, including how the final version compares to an earlier draft (1:26), and the legal and political challenges it is likely to face (14:46). We then discuss recent Reuters reporting on Meta’s reliance on scam-driven ad revenue (22:12) and what the social media experience suggests about the risks of failing to regulate AI (45:21). | 56m 33s |
| 12/9/25 | White House Greenlights H200 Exports, DOE Unveils Genesis Mission, and Insurers Move to Limit AI Coverage | In this episode, we break down the White House’s decision to let Nvidia’s H200 chips be exported to China and Greg’s case against the move (00:33). We then discuss Trump’s planned “One Rule” executive order to preempt state AI laws (18:59), examine the NDAA's proposed AI Futures Steering Committee (23:09), and analyze the Genesis Mission executive order (26:07), comparing its ambitions and funding reality to the Manhattan Project and Apollo program. We close by looking at why major insurers are seeking to exclude AI risks from corporate policies and how that could impact AI adoption, regulation, and governance (40:29). | 52m 55s |
| 11/21/25 | Trump’s Draft AI Preemption Order, EU AI Act Delays, and Anthropic's Cyberattack Report | In this episode, we start by discussing Greg's trip to India and the upcoming India AI Impact Summit in February 2026 (00:29). We then unpack the Trump Administration’s draft executive order to preempt state AI laws (07:46) and break down the European Commission’s new “digital omnibus” package, including proposed adjustments to the AI Act and broader regulatory simplification efforts (17:51). Finally, we discuss Anthropic’s report on a China-backed “highly sophisticated cyber espionage campaign” using Claude and the mixed reactions from cybersecurity and AI policy experts (37:37). | 54m 26s |
| 11/5/25 | What Selling Nvidia's Blackwell Chips to China Would Mean for the AI Race | In this episode, Georgia Adamson and Saif Khan from the Institute for Progress join Greg to unpack their October 25 paper, "Should the US Sell Blackwell Chips to China?" They discuss the geopolitical context of the paper (3:26), how the rumored B30A would compare to other advanced AI chips (11:37), and the potential consequences if the US were to permit B30A exports to China (32:00). Their paper is available here. | 1h 04m 29s |
| 10/30/25 | How to Build a Career in AI Policy | One of the most common questions we get from listeners is how to build a successful career in AI policy, so we dedicated an entire episode to answering it. We cover the most formative experiences from Greg's career journey (3:30), general principles for professional success (45:09), and actionable tips specific to breaking into the AI policy space (1:11:52). | 1h 52m 20s |
| 10/23/25 | Sora 2 and the Deepfake Boom | In this episode, we cover OpenAI’s latest video-generation model Sora 2 (1:02), concrete harms and potential risks from deepfakes (5:18), the underlying technology and its history (27:03), and how policy can mitigate harms (36:31). | 1h 01m 03s |
| 10/21/25 | Congressman Jay Obernolte on the Future of U.S. AI Regulation | In this episode, we are joined by Rep. Jay Obernolte, one of Congress’s leading voices on AI policy. We discuss his path from developing video games to serving in Congress (00:49), the work of the bipartisan House Task Force on AI and its final report (9:39), competing approaches to designing AI regulation in Congress (16:38), and prospects for federal preemption of state AI legislation (40:32). Congressman Obernolte has represented California’s 23rd district since 2021. He co-chaired the bipartisan House Task Force on AI, leading the development of an extensive December 2024 report outlining a congressional agenda for AI. He also serves as vice-chair of the Congressional AI Caucus and is the only current member of Congress with an advanced degree in Artificial Intelligence, which he earned from UCLA in 1997. Rep. Obernolte previously served in the California State Legislature. | 58m 34s |
| 10/17/25 | The Impact of AI on Labor with Harry Holzer | In this episode, we are joined by economist Harry Holzer to discuss how AI is set to transform labor. Holzer was Chief Economist at the U.S. Department of Labor during the Clinton administration and is currently a Professor of Public Policy at Georgetown University. We break down the fundamentals of the labor market (4:00) and the current and future impact of AI automation (10:30). Holzer also reacts to Anthropic CEO Dario Amodei's warning that AI could eliminate half of entry-level white-collar jobs (23:32) and explains why we need better data capturing AI's impact on the labor market (52:53). Harry Holzer recently co-authored a white paper titled "Proactively Developing & Assisting the Workforce in the Age of AI," which is available here. | 1h 07m 10s |
| 10/9/25 | What California's SB 53 Means for AI Safety | In this episode, we dive into California's new AI transparency law, SB 53. We explore the bill's history (00:30), contrast it with the more controversial SB 1047 (6:43), break down the specific disclosure requirements for AI labs of different scales (13:38), and discuss how industry stakeholders and policy experts have responded to the legislation (29:47). | 37m 22s |
Showing 25 of 92 episodes.
Chart Positions
21 placements across 21 markets.


