Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Most discussed topics
Brands & references
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience (est. listeners per new episode within ~30 days): 1,001–10,000
- Monthly Reach (unique listeners across all episodes, 30 days): 5,001–25,000
- Active Followers (loyal subscribers who consistently listen): 5,001–15,000
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers: —
Total Plays: —
Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Hosts (from 10 eps): Not detected.
Recent guests
Recent episodes
#84 – Dean Spears on the Case for People
Nov 1, 2025
1h 43m 24s
#83 – Max Smeets on Barriers To Cyberweapons
Mar 13, 2025
1h 36m 19s
#82 – Tom Kalil on Institutions for Innovation (with Matt Clancy)
Dec 14, 2024
1h 17m 37s
#81 – Cynthia Schuck on Quantifying Animal Welfare
Nov 21, 2024
1h 37m 16s
#80 – Dan Williams on How Persuasion Works
Oct 26, 2024
1h 48m 43s
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Topics | Guests | Brands | Places | Keywords | Sponsor | Length |
|---|---|---|---|---|---|---|---|---|
| 11/1/25 | #84 – Dean Spears on the Case for People | Economic Demography, Development Economics +2 | Dean Spears | After the Spike, the University of Texas at Austin +1 | — | After the Spike, University of Texas at Austin +1 | — | 1h 43m 24s |
| 3/13/25 | #83 – Max Smeets on Barriers To Cyberweapons | cyber operations, international norms +2 | Max Smeets | ETH Zurich, Center for Security Studies | — | cybersecurity, nation state +2 | — | 1h 36m 19s |
| 12/14/24 | #82 – Tom Kalil on Institutions for Innovation (with Matt Clancy) | innovation, philanthropy +2 | Tom Kalil, Matt Clancy | Renaissance Philanthropy, Schmidt Futures +6 | US | influence without authority, innovation prizes +2 | — | 1h 17m 37s |
| 11/21/24 | #81 – Cynthia Schuck on Quantifying Animal Welfare | animal welfare, quantification +6 | Cynthia Schuck-Paim | the Welfare Footprint Project | — | Welfare Footprint Project, animal experiences +2 | — | 1h 37m 16s |
| 10/26/24 | #80 – Dan Williams on How Persuasion Works | persuasion, reasoning +4 | Dan Williams | the University of Sussex, the Leverhulme Centre for the Future of Intelligence +2 | — | mind viruses, luxury beliefs +3 | — | 1h 48m 43s |
| 9/14/24 | #79 – Tamay Besiroglu on Explosive Growth from AI | AI, economics +4 | Tamay Besiroglu | Epoch AI | — | explosive growth, increasing returns to scale +3 | — | 2h 09m 19s |
| 9/8/24 | #78 – Jacob Trefethen on Global Health R&D | global health, R&D +5 | Jacob Trefethen | Global Health R&D, Open Philanthropy +4 | — | health technologies, TB vaccine +3 | — | 2h 30m 16s |
| 7/25/24 | #77 – Elizabeth Seger on Open Sourcing AI | open source AI, AI safety research +3 | Elizabeth Seger | Llama 3.1, Demos +2 | UK | open source, AI models +3 | — | 1h 20m 49s |
| 3/16/24 | #76 – Joe Carlsmith on Scheming AI | artificial intelligence, existential risk +2 | Joe Carlsmith | Open Philanthropy, the University of Oxford +2 | — | scheming AI, deceptive AI +2 | — | 1h 51m 32s |
| 2/4/24 | #75 – Eric Schwitzgebel on Digital Consciousness and the Weirdness of the World | digital consciousness, philosophy of mind +4 | Eric Schwitzgebel | The Weirdness of the World, the University of California, Riverside +1 | — | moral mistakes, overlapping minds +1 | — | 1h 58m 50s |
#74 – Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons
Dec 19, 2023
Sonia Ben Ouagrham-Gormley is an associate professor at George Mason University and Deputy Director of their Biodefence Programme. In this episode we talk about:
- Where the belief that 'bioweapons are easy to make' came from and why it has been difficult to change
- Why transferring tacit knowledge is so difficult, and the particular challenges that rogue actors face
- What Sonia makes of the AI-bio risk discourse and what types of advances in technology would cause her concern
You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Bonus: 'How I Learned To Love Shrimp' & David Coman-Hidy
Nov 24, 2023
In this bonus episode we are sharing an episode by another podcast: How I Learned To Love Shrimp. It is co-hosted by Amy Odene and James Ozden, who together are "showcasing innovative and impactful ways to help animals". In this interview they speak to David Coman-Hidy, the former President of The Humane League, one of the largest farm animal advocacy organisations in the world. He now works as a Partner at Sharpen Strategy, coaching animal advocacy organisations.
#73 – Michelle Lavery on the Science of Animal Welfare
Nov 22, 2023
Michelle Lavery is a Program Associate with Open Philanthropy’s Farm Animal Welfare team, with a focus on the science and study of animal behaviour & welfare. You can see more links and a full transcript at hearthisidea.com/episodes/lavery. In this episode we talk about:
- How do scientists study animal emotions in the first place? How is a "science" of animal emotion even feasible?
- When is it useful to anthropomorphise animals to understand them?
- How can you study the preferences of animals? How can you measure the "strength" of preferences?
- How do farmed animal welfare advocates relate to animal welfare science? Are their perceptions fair?
- How can listeners get involved with the study of animal emotions?
You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#72 – Richard Bruns on Indoor Air Quality
Nov 4, 2023
Dr Richard Bruns is a Senior Scholar at the Johns Hopkins Center for Health Security, and before that was a Senior Economist at the US Food and Drug Administration (the FDA). In this episode we talk about the importance of indoor air quality (IAQ), and how to improve it. Including:
- Estimating the DALY cost of unclean indoor air from pathogens and particulate matter
- How much pandemic risk could be reduced from improving IAQ?
- How economists convert health losses into dollar figures — and how not to put a price on life
- Key interventions to improve IAQ: air filtration and germicidal UV light (especially Far-UVC light)
- Barriers to adoption, including UV smog, and the empirical studies needed most
- National and state-level policy changes to get these interventions adopted widely
You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#71 – Saloni Dattani on Malaria Vaccines and Missing Data in Global Health
Oct 19, 2023
Saloni Dattani is a Researcher at Our World in Data, and a founder & editor at the online magazine Works in Progress. She holds a PhD in psychiatric genetics from King’s College London. You can see more links and a full transcript at hearthisidea.com/episodes/dattani. In this episode we talk about:
- The history of malaria and attempts to eradicate it
- The role of DDT and insecticide spraying campaigns — and why they were scaled down
- Why we didn’t get a malaria vaccine sooner
- What comes after vaccine discovery — rolling out the RTS,S vaccine
- New funding models to accelerate similar life-saving research, like vaccines for TB and HIV
- Why so much global health data is missing, and why that matters
- How the ‘million deaths study’ revealed that about 50,000 deaths per year from snakebites in India went uncounted by health agencies
You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#70 – Liv Boeree on Healthy vs Unhealthy Competition
Sep 20, 2023
Liv Boeree is a former poker champion turned science communicator and podcaster, with a background in astrophysics. In 2014, she founded the nonprofit Raising for Effective Giving, which has raised more than $14 million for effective charities. Before retiring from professional poker in 2019, Liv was the Female Player of the Year for three years running. Currently she hosts the Win-Win podcast (you’ll enjoy it if you enjoy this podcast). You can see more links and a full transcript at hearthisidea.com/episodes/boeree. In this episode we talk about:
- Is the ‘poker mindset’ valuable? Is it learnable?
- How and why to bet on your beliefs — and whether there are outcomes you shouldn’t make bets on
- Would cities be better without public advertisements?
- What is Moloch, and why is it a useful abstraction? How do we escape multipolar traps?
- Why might advanced AI (not) act like profit-seeking companies?
- What’s so important about complexity? What is complexity, for that matter?
You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#69 – Jon Y (Asianometry) on Problems And Progress in Semiconductor Manufacturing
Aug 31, 2023
Jon Y is the creator of the Asianometry YouTube channel and accompanying newsletter. He describes his channel as making "video essays on business, economics, and history. Sometimes about Asia, but not always." You can see more links and a full transcript at hearthisidea.com/episodes/asianometry. In this episode we talk about:
- Compute trends driving recent progress in Artificial Intelligence
- The semiconductor supply chain and its geopolitics
- The buzz around LK-99 and superconductivity
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#68 – Steven Teles on what the Conservative Legal Movement Teaches about Policy Advocacy
Aug 4, 2023
Steven Teles is a Professor of Political Science at Johns Hopkins University and a Senior Fellow at the Niskanen Center. His work focuses on American politics, and he has written several books on topics such as elite politics, the judiciary, and mass incarceration. You can see more links and a full transcript at hearthisidea.com/teles. In this episode we talk about:
- The rise of the conservative legal movement
- How ideas can come to be entrenched in American politics
- Challenges in building a new academic field like "law and economics"
- The limitations of doing quantitative evaluations of advocacy groups
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#67 – Guive Assadi on Whether Humanity Will Choose Its Future
Jul 18, 2023
Guive Assadi is a Research Scholar at the Center for the Governance of AI. Guive’s research focuses on the conceptual clarification of, and prioritisation among, potential risks posed by emerging technologies. He holds a master’s in history from Cambridge University, and a bachelor’s from UC Berkeley. In this episode, we discuss Guive's paper, Will Humanity Choose Its Future?:
- What is an 'evolutionary future', and would it count as an existential catastrophe?
- How did the agricultural revolution deliver a world which few people would have chosen?
- What does it mean to say that we are living in the dreamtime? Will it last?
- What competitive pressures in the future could drive the world to undesired outcomes? Digital minds; space settlement
- What measures could prevent an evolutionary future, and allow humanity to more deliberately choose its future? World government; strong global coordination; defensive advantage
- Should this all make us more or less hopeful about humanity's future?
- Ideas for further research
Guive's recommended reading:
- Rationalist Explanations for War by James D. Fearon
- Meditations on Moloch by Scott Alexander
- The Age of Em by Robin Hanson
- What is a Singleton? by Nick Bostrom
Other key links:
- Will Humanity Choose Its Future? by Guive Assadi
- Colder Wars by Gwern
- The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter by Joseph Henrich (and a review by Scott Alexander)
#66 – Michael Cohen on Input Tampering in Advanced RL Agents
Jun 25, 2023
Michael Cohen is a DPhil student at the University of Oxford with Mike Osborne. He will be starting a postdoc with Professor Stuart Russell at UC Berkeley, with the Center for Human-Compatible AI. His research considers the expected behaviour of generally intelligent artificial agents, with a view to designing agents that we can expect to behave safely. You can see more links and a full transcript at www.hearthisidea.com/episodes/cohen. We discuss:
- What is reinforcement learning, and how is it different from supervised and unsupervised learning?
- Michael's recently co-authored paper titled 'Advanced artificial agents intervene in the provision of reward'
- Why might it be hard to convey what we really want to RL learners — even when we know exactly what we want?
- Why might advanced RL systems tamper with their sources of input, and why could this be very bad?
- What assumptions need to hold for this "input tampering" outcome?
- Is reward really the optimisation target? Do models "get reward"?
- What's wrong with the analogy between RL systems and evolution?
Key links:
- Michael's personal website
- 'Advanced artificial agents intervene in the provision of reward' by Michael K. Cohen, Marcus Hutter, and Michael A. Osborne
- 'Pessimism About Unknown Unknowns Inspires Conservatism' by Michael Cohen and Marcus Hutter
- 'Intelligence and Unambitiousness Using Algorithmic Information Theory' by Michael Cohen, Badri Vallambi, and Marcus Hutter
- 'Quantilizers: A Safer Alternative to Maximizers for Limited Optimization' by Jessica Taylor
- 'RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning' by Marc Rigter, Bruno Lacerda, and Nick Hawes
- Season 40 of Survivor
#65 – Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up
Jun 10, 2023
Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum. We discuss:
- What is AI Impacts working on?
- Counterarguments to the basic AI x-risk case
- Reasons to doubt that superhuman AI systems will be strongly goal-directed
- Reasons to doubt that if goal-directed superhuman AI systems are built, their goals will be bad by human lights
- Aren't deep learning systems fairly good at understanding our 'true' intentions?
- Reasons to doubt that (misaligned) superhuman AI would overpower humanity
- The case for slowing down AI
- Is AI really an arms race?
- Are there examples from history of valuable technologies being limited or slowed down?
- What does Katja think about the recent open letter on pausing giant AI experiments?
- Why read George Saunders?
Key links:
- World Spirit Sock Puppet (Katja's main blog)
- Counterarguments to the basic AI x-risk case
- Let's think about slowing down AI
- We don't trade with ants
- Thank You, Esther Forbes (George Saunders)
You can see more links and a full transcript at hearthisidea.com/episodes/grace.
#64 – Michael Aird on Strategies for Reducing AI Existential Risk
Jun 7, 2023
Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52. In this episode, we talk about:
- The basic case for working on existential risk from AI
- How to begin figuring out what to do to reduce the risks
- Threat models for the risks of advanced AI
- 'Theories of victory' for how the world mitigates the risks
- 'Intermediate goals' in AI governance
- What useful (and less useful) research looks like for reducing AI x-risk
- Practical advice for usefully contributing to efforts to reduce existential risk from AI
- Resources for getting started and finding job openings
Key links:
- Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023)
- Rethink Priorities' survey on intermediate goals in AI governance
- The Rethink Priorities newsletter
- The Rethink Priorities tab on the Effective Altruism Forum
- Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier
- Strategic Perspectives on Long-term AI Governance by Matthijs Maas
- Michael's posts on the Effective Altruism Forum (under the username "MichaelA")
- The 80,000 Hours job board
#63 – Ben Garfinkel on AI Governance
May 13, 2023
Ben Garfinkel is a Research Fellow at the University of Oxford and Acting Director of the Centre for the Governance of AI. In this episode we talk about:
- An overview of the AI governance space, and disentangling concrete research questions that Ben would like to see more work on
- Seeing how existing arguments for the risks from transformative AI have held up, and Ben’s personal motivations for working on global risks from AI
- GovAI’s own work and opportunities for listeners to get involved
Further reading and a transcript is available on our website: hearthisidea.com/episodes/garfinkel
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#62 – Anders Sandberg on Exploratory Engineering, Value Diversity, and Grand Futures
Apr 20, 2023
Anders Sandberg is a researcher, futurist, transhumanist and author. He holds a PhD in computational neuroscience from Stockholm University, and is currently a Senior Research Fellow at the Future of Humanity Institute at the University of Oxford. His research covers human enhancement, exploratory engineering, and 'grand futures' for humanity. This episode is a recording of a live interview at EAGx Cambridge (2023). You can find upcoming effective altruism conferences here: www.effectivealtruism.org/ea-global. We talk about:
- What is exploratory engineering and what is it good for?
- Progress on whole brain emulation
- Are we near the end of humanity's tech tree?
- Is diversity intrinsically valuable in grand futures?
- How Anders does research
- Virtue ethics for civilisations
- Anders' takes on AI risk and whether LLMs are close to general intelligence
- And much more!
Further reading and a transcript is available on our website: hearthisidea.com/episodes/sandberg-live
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
#61 – Rory Stewart on GiveDirectly and Massively Scaling Cash Transfers
Apr 3, 2023
Rory Stewart is the President of GiveDirectly and a visiting fellow at Yale’s Jackson Institute for Global Affairs. Before that, Rory was (amongst other things) a Member of Parliament in the UK, a Professor in Human Rights at Harvard, and a diplomat. He is also the author of several books and co-hosts the podcast The Rest Is Politics. In this episode, we talk about:
- The moral case for radically scaling cash transfers
- What we can do to raise governments’ ambitions to end global poverty
- What Rory learned about aid since being Secretary of State for International Development
Further reading is available on our website: hearthisidea.com/episodes/stewart
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Chart Positions
5 placements across 4 markets.