
Future of Life Institute Podcast
by Future of Life Institute
The Future of Life Institute is a nonprofit organization dedicated to mitigating global catastrophic and existential risks associated with advanced technologies. Renowned for its advocacy and educational outreach, FLI addresses critical iss…
Insights from recent episode analysis
Audience Interest
- artificial intelligence governance
- biotechnology risks
Podcast Focus
- reducing global catastrophic risks
- advocacy for technology governance
Publishing Consistency
- 10 years active
- weekly or more episodes
Platform Reach
- specific platforms not detected
- unknown total followers
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Most discussed topics
Brands & references
Total monthly reach
Estimated from 6 chart positions in 6 markets.
By chart position
- 🇸🇪 SE · Technology · #136 · 1K to 10K
- 🇮🇳 IN · Technology · #141 · 1K to 10K
- 🇳🇿 NZ · Technology · #51 · 3K to 10K
- 🇷🇴 RO · Technology · #55 · 3K to 10K
- 🇨🇿 CZ · Technology · #178 · 500 to 3K
- Per-Episode Audience
  Est. listeners per new episode within ~30 days: 4.5K to 23K
  🎙 ~2x weekly · 264 episodes · Last published 4d ago
- Monthly Reach
  Unique listeners across all episodes (30 days): 9K to 46K
  🇸🇪 22% · 🇮🇳 22% · 🇳🇿 22% · +3 more
- Active Followers
  Loyal subscribers who consistently listen: 5.0K to 25K
  96K real followers tracked across platforms
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers: —
Total Plays: —
Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Hosts (from 12 episodes)
Recent guests
Recent episodes
Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)
May 11, 2026
1h 36m 12s
How to Govern AI When You Can't Predict the Future (with Charlie Bullock)
May 7, 2026
1h 07m 11s
Why AI Is Not a Normal Technology (with Peter Wildeford)
Apr 29, 2026
1h 24m 00s
Why AI Evaluation Science Can't Keep Up (with Carina Prunkl)
Apr 17, 2026
54m 23s
Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)
Apr 2, 2026
55m 47s
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Topics | Guests | Brands | Places | Keywords | Sponsor | Length |
|---|---|---|---|---|---|---|---|---|
| 5/11/26 | Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre) | Anthony Aguirre is the CEO of the Future of Life Institute. He joins the podcast to discuss A Better Path for AI, his essay series on steering AI away from races to replace people. The conversation covers races for attention, attachment, automation, and superintelligence, and how these can concentrate power and undermine human agency. Anthony argues for purpose-built AI tools under meaningful human control, with liability, access limits, external guardrails, and international cooperation. LINKS: A Better Path for AI · What You Can Do. CHAPTERS: (00:00) Episode Preview (01:03) Attention, attachment, automation (13:58) Superintelligence power race (26:39) Escaping replacement dynamics (40:15) Pro-human tool AI (53:30) Guardrails and verification (01:03:24) Defining pro-human AI (01:10:37) Agents and accountability (01:17:28) International AI cooperation (01:25:28) Rethinking AI alignment (01:32:43) Optimism and action. PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 36m 12s |
| 5/7/26 | How to Govern AI When You Can't Predict the Future (with Charlie Bullock) | AI governance, radical optionality, +4 | Charlie Bullock | Institute for Law and AI, Future of Life Institute | — | AI, governance, +5 | — | 1h 07m 11s |
| 4/29/26 | Why AI Is Not a Normal Technology (with Peter Wildeford) | AI forecasting, economic implications of AI, +4 | Peter Wildeford | AI Policy Network, Future of Life Institute | — | AI, forecasting, +5 | — | 1h 24m 00s |
| 4/17/26 | Why AI Evaluation Science Can't Keep Up (with Carina Prunkl) | AI evaluation, general-purpose AI, +4 | Carina Prunkl | Inria, Future of Life Institute | — | AI evaluation, general-purpose AI, +4 | — | 54m 23s |
| 4/2/26 | Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang) | AI risk, workforce development, +4 | Li-Lian Ang | Blue Dot Impact, Future of Life Institute | — | AI risks, engineered pandemics, +4 | — | 55m 47s |
| 3/20/26 | What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky) | AI and cancer, drug discovery, +3 | Emilia Javorsky | Future of Life Institute, FDA, +1 | — | AI, cancer, +4 | — | 1h 12m 10s |
| 3/16/26 | AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky) | AI, cancer research, +3 | Emilia Javorsky | Future of Life Institute, curecancer.ai, +2 | — | AI, cancer, +5 | — | 2h 43m 13s |
| 3/5/26 | How AI Hacks Your Brain's Attachment System (with Zak Stein) | AI and psychology, attachment systems, +4 | Zak Stein | Future of Life Institute, AI Psychological Harms Research Coalition, +1 | — | AI companions, cognitive atrophy, +4 | — | 1h 44m 40s |
| 2/20/26 | The Case for a Global Ban on Superintelligence (with Andrea Miotti) | superintelligence, AI regulation, +3 | Andrea Miotti | Control AI, Future of Life Institute | — | superintelligence, AI risks, +3 | — | 1h 07m 10s |
| 2/6/26 | Can AI Do Our Alignment Homework? (with Ryan Kidd) | AGI timelines, model deception risks, +3 | Ryan Kidd | MATS, Future of Life Institute | — | AGI, AI safety, +4 | — | 1h 46m 33s |
| 1/27/26 | How to Rebuild the Social Contract After AGI (with Deric Cheng) | social contract, AGI, +5 | Deric Cheng | Windfall Trust, Future of Life Institute, +2 | — | AGI, social contract, +5 | — | 1h 04m 39s |
| 1/20/26 | How AI Can Help Humanity Reason Better (with Oly Sourbut) | AI, human reasoning, +4 | Oly Sourbut | Future of Life Foundation, Future of Life Institute | — | AI, human reasoning, +4 | — | 1h 17m 33s |
| 1/7/26 | How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann) | AI risks, slow AI takeoff, +5 | Nora Ammann | Advanced Research and Invention Agency, AI Resilience, +2 | — | AI, domination, +5 | — | 1h 20m 01s |
| 12/23/25 | How Humans Could Lose Power Without an AI Takeover (with David Duvenaud) | David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require. LINKS: David Duvenaud academic homepage · Gradual Disempowerment · The Post-AGI Workshop · Post-AGI Studies Discord. CHAPTERS: (00:00) Episode Preview (01:05) Introducing gradual disempowerment (06:06) Obsolete labor and UBI (14:29) Property, power, and control (23:38) Culture shifts toward AIs (34:34) States misalign without people (44:15) Competition and preservation tradeoffs (53:03) Building post-AGI studies (01:02:29) Forecasting and coordination tools (01:10:26) Human values and futures. PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 18m 34s |
| 12/12/25 | Why the AI Race Undermines Safety (with Steven Adler) | Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models. LINKS: Steven Adler's Substack: https://stevenadler.substack.com CHAPTERS: (00:00) Episode Preview (01:00) Race Dynamics And Safety (18:03) Chatbots And Mental Health (30:42) Models Outsmart Safety Tests (41:01) AI Swarms And Work (54:21) Human Bottlenecks And Oversight (01:06:23) Animals And Superintelligence (01:19:24) Safety Capabilities And Governance. PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 28m 45s |
| 11/27/25 | Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston) | Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies. LINKS: The Midas Project Website · Tyler Johnston's LinkedIn Profile. CHAPTERS: (00:00) Episode Preview (01:06) Introducing the Midas Project (05:01) Shining a Light on AI (08:36) Industry Lockdown and Transparency (13:45) The OpenAI Files (20:55) Subpoenaed by OpenAI (29:10) Responding to the Subpoena (37:41) The Case for Transparency (44:30) Pricing Risk and Regulation (52:15) Measuring Transparency and Auditing (57:50) Hope for the Future. PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 01m 20s |
| 11/14/25 | We're Not Ready for AGI (with Will MacAskill) | William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning. LINKS: Better Futures Research Series: https://www.forethought.org/research/better-futures · William MacAskill Forethought Profile: https://www.forethought.org/people/william-macaskill CHAPTERS: (00:00) Episode Preview (01:03) Improving The Future's Quality (09:58) Moral Errors and AI Rights (18:24) AI's Impact on Thinking (27:17) Utopias and Population Ethics (36:41) The Danger of Moral Lock-in (44:38) Deals with Misaligned AI (57:25) AI and Moral Trade (01:08:21) Improving AI Ethical Reasoning (01:16:05) The Risk of Path Dependence (01:27:41) Avoiding Future Lock-in (01:36:22) The Urgency of Space Governance (01:46:19) A Future Research Agenda (01:57:36) Is Intelligence a Good Bet? PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 2h 03m 08s |
| 11/7/25 | What Happens When Insiders Sound the Alarm on AI? (with Karl Koch) | Karl Koch is founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and challenges of maintaining transparency as AI development accelerates. LINKS: About the AI Whistleblower Initiative · Karl Koch. PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (00:55) Starting the Whistleblower Initiative (05:43) Current State of Protections (13:04) Path to Optimal Policies (23:28) A Whistleblower's First Steps (32:29) Life After Whistleblowing (39:24) Evaluating Company Policies (48:19) Alternatives to Whistleblowing (55:24) High-Stakes Future Scenarios (01:02:27) AI and National Security. SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP DISCLAIMERS: - AIWI does not request, encourage or counsel potential whistleblowers or listeners of this podcast to act unlawfully. - This is not legal advice and if you, the listener, find yourself needing legal counsel, please visit https://aiwi.org/contact-hub/ for detailed profiles of the world's leading whistleblower support organizations. | | | | | | 1h 08m 16s |
| 10/24/25 | Can Machines Be Truly Creative? (with Maya Ackerman) | Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities. LINKS: Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman · Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/ PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (01:00) Defining Human Creativity (02:58) Machine and AI Creativity (06:25) Measuring Subjective Creativity (10:07) Creativity in Animals (13:43) Alignment Damages Creativity (19:09) Creativity is Hallucination (26:13) Humble Creative Machines (30:50) Incentives and Replacement (40:36) Analogies for the Future (43:57) Collaborating with AI (52:20) Reinforcement Learning & Slop (55:59) AI in Education. SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 01m 51s |
| 10/14/25 | From Research Labs to Product Companies: AI's Transformation (with Parmy Olson) | Parmy Olson is a technology columnist at Bloomberg and the author of Supremacy, which won the 2024 Financial Times Business Book of the Year. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and power consolidation in the industry. LINKS: Parmy Olson on X (Twitter): https://x.com/parmy · Parmy Olson's Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson · Supremacy (book): https://www.panmacmillan.com/authors/parmy-olson/supremacy/9781035038244 PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (01:18) Introducing Parmy Olson (02:37) Personalities Driving AI (06:45) From Research to Products (12:45) Has the Mission Changed? (19:43) The Role of Regulators (21:44) Skepticism of AI Utopia (28:00) The Human Cost (33:48) Embracing Controversy (40:51) The Role of Journalism (41:40) Big Tech's Influence. SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 46m 37s |
| 10/3/25 | Can Defense in Depth Work for AI? (with Adam Gleave) | Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he joins to discuss post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI's vertically integrated approach spanning technical research, policy advocacy, and field-building. LINKS: Adam Gleave - https://www.gleave.me · FAR.AI - https://www.far.ai · The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) A Positive Post-AGI Vision (10:07) Surviving Gradual Disempowerment (16:34) Defining Powerful AIs (27:02) Solving Continual Learning (35:49) The Just-in-Time Safety Problem (42:14) Can Defense-in-Depth Work? (49:18) Fixing Alignment Problems (58:03) Safer Training Formulas (01:02:24) The Role of Interpretability (01:09:25) FAR.AI's Vertically Integrated Approach (01:14:14) Hiring at FAR.AI (01:16:02) The Future of Governance. SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 18m 35s |
| 9/26/25 | How We Keep Humans in Control of AI (with Beatrice Erkers) | Beatrice works at the Foresight Institute running their Existential Hope program. She joins the podcast to discuss the AI pathways project, which explores two alternative scenarios to the default race toward AGI. We examine tool AI, which prioritizes human oversight and democratic control, and d/acc, which emphasizes decentralized, defensive development. The conversation covers trade-offs between safety and speed, how these pathways could be combined, and what different stakeholders can do to steer toward more positive AI futures. LINKS: AI Pathways - https://ai-pathways.existentialhope.com · Beatrice Erkers - https://www.existentialhope.com/team/beatrice-erkers CHAPTERS: (00:00) Episode Preview (01:10) Introduction and Background (05:40) AI Pathways Project (11:10) Defining Tool AI (17:40) Tool AI Benefits (23:10) D/acc Pathway Explained (29:10) Decentralization Trade-offs (35:10) Combining Both Pathways (40:10) Uncertainties and Concerns (45:10) Future Evolution (01:01:21) Funding Pilots. PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 06m 45s |
| 9/18/25 | Why Building Superintelligence Means Human Extinction (with Nate Soares) | Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence. LINKS: If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com · Machine Intelligence Research Institute - https://intelligence.org · Nate Soares - https://intelligence.org/team/nate-soares/ PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) Episode Preview (01:05) Introduction and Book Discussion (03:34) Psychology of AI Alarmism (07:52) Intelligence Threshold Effects (11:38) Growing vs Crafting AI (18:23) Illusion of AI Control (26:45) Why Iteration Won't Work (34:35) The No Retries Problem (38:22) Computer Security Lessons (49:13) The Cursed Problem (59:32) Multiple Curses and Complications (01:09:44) AI's Infrastructure Advantage (01:16:26) Grading Humanity's Response (01:22:55) Time Needed for Solutions (01:32:07) International Ban Necessity. SOCIAL LINKS: Website: https://podcast.futureoflife.org Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP | | | | | | 1h 39m 38s |
| 9/10/25 | Breaking the Intelligence Curse (with Luke Drago) | Luke Drago is the co-founder of Workshop Labs and co-author of the essay series "The Intelligence Curse". The essay series explores what happens if AI becomes the dominant factor of production, thereby reducing incentives to invest in people. We explore pyramid replacement in firms, economic warning signs to monitor, automation barriers like tacit knowledge, privacy risks in AI training, and tensions between centralized AI safety and democratization. Luke discusses Workshop Labs' privacy-preserving approach and advises taking career risks during this technological transition. "The Intelligence Curse" essay series by Luke Drago & Rudolf Laine: https://intelligence-curse.ai/ Luke's Substack: https://lukedrago.substack.com/ Workshop Labs: https://workshoplabs.ai/ CHAPTERS: (00:00) Episode Preview (00:55) Intelligence Curse Introduction (02:55) AI vs Historical Technology (07:22) Economic Metrics and Indicators (11:23) Pyramid Replacement Theory (17:28) Human Judgment and Taste (22:25) Data Privacy and Control (28:55) Dystopian Economic Scenario (35:04) Resource Curse Lessons (39:57) Culture vs Economic Forces (47:15) Open Source AI Debate (54:37) Corporate Mission Evolution (59:07) AI Alignment and Loyalty (01:05:56) Moonshots and Career Advice | | | | | | 1h 09m 38s |
| 9/1/25 | What Markets Tell Us About AI Timelines (with Basil Halperin) | Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress. * Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf * Read more about Basil's work here: https://basilhalperin.com/ CHAPTERS: (00:00) Episode Preview (00:49) Introduction and Background (05:19) Efficient Market Hypothesis Explained (10:34) Markets and Low Probability Events (16:09) Information Diffusion on Wall Street (24:34) Stock Prices vs Interest Rates (28:47) New Goods Counter-Argument (40:41) Why Focus on Interest Rates (45:00) AI Secrecy and Market Efficiency (50:52) Short Timeline Disagreements (55:13) Wealth Concentration Effects (01:01:55) Alternative Economic Indicators (01:12:47) Benchmarks vs Economic Impact (01:25:17) Open Research Questions. SOCIAL LINKS: Website: https://future-of-life-institute-podcast.aipodcast.ing Twitter (FLI): https://x.com/FLI_org Twitter (Gus): https://x.com/gusdocker LinkedIn: https://www.linkedin.com/company/future-of-life-institute/ YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/ Apple Podcasts: https://geo.itunes.apple.com/us/podcast/id1170991978 Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP PRODUCED BY: https://aipodcast.ing | | | | | | 1h 36m 10s |
Showing 25 of 267
Similar Audience Demographics
Podcasts that attract a similar listener profile
Chart Positions
6 placements across 6 markets.

























