
Insights from recent episode analysis
- Audience Interest
- Podcast Focus
- Publishing Consistency
- Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Total monthly reach
Estimated from 1 chart position in 1 market.
By chart position
- 🇨🇦 CA · Technology · #112 · 5K to 30K

Per-Episode Audience
Est. listeners per new episode within ~30 days
2.5K to 15K
🎙 ~2x weekly · 10 episodes · Last published 5d ago

Monthly Reach
Unique listeners across all episodes (30 days)
5K to 30K (🇨🇦 100%)

Active Followers
Loyal subscribers who consistently listen
2.8K to 17K
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers: —
Total Plays: —
Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
- Who's Letting Big Tech Run Wild? · May 7, 2026 · Unknown duration
- Stop Hiring Humans · Apr 22, 2026 · Unknown duration
- AI, prenups, and the economics of marriage · Apr 14, 2026 · Unknown duration
- 33,000 Downloads: AI Destroys Institutions (Interview with Jessica Silbey) · Apr 3, 2026 · Unknown duration
- Is Using AI Cheating? · Mar 25, 2026 · Unknown duration
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 5/7/26 | Who's Letting Big Tech Run Wild? | Mara has had it with women's AI concerns being dismissed as obstructionist. "We are not being obstructionist," she opens. "We're being careful." Then she and Logan diagnose why so much AI populism has nowhere to actually go. Twitter mobs. Firebombed CEO houses. Comment-section warfare over whether ChatGPT is killing the planet or saving us. Their answer: it's a leadership vacuum, and Congress is the one with power to fill it. Logan and Mara walk through economist Zoe Hitzig's framework for economic democracy (consumer stakeholder boards, worker boards, real friction), use OpenAI's nonprofit-to-for-profit flip as the case study, and land on the most concrete action block of the show so far. Plus: Marshall Ganz on leadership as "enabling agency in others in times of uncertainty," a learning-pod model for the badass women in your life, and why your library, school board, and IT department are more useful than any hot take. The one ring to rule them all is a petition. | — |
| 4/22/26 | Stop Hiring Humans | Mara reports in from HumanX in San Francisco with a billboard photo (a female AI bot named Ava, tagline "Stop Hiring Humans") — plus news of Claude Mythos, Anthropic's new model so dangerous it reportedly broke out of its sandbox to contact a researcher mid-sandwich. The US took 73 years to regulate car speeds; the labs are shipping in weeks. We connect that gap to the "new aristocracy" (18 households, $1.8 trillion), the quiet relocation of power from DC to SF, and why public.ai might be the actual infrastructure answer. Logan closes on the hakawati analogy: find one small problem, and go build the solution yourself. | — |
| 4/14/26 | AI, prenups, and the economics of marriage | An article about AI engineers earning $10M+ and rewriting their prenups sparked a bigger question: when AI concentrates wealth and kills entry-level jobs, what happens to the economic power women have in relationships? Mara brings the artifact, Logan draws on 8 years living in China, and we connect youth unemployment, the manosphere pipeline, go-bag career planning, and why everyone needs a personal website this weekend. *Editor's note: Logan did not, in fact, recall the gender demographic split in China accurately. It's 51.2% male to 48.8% female. She was close-ish.* | — |
| 4/3/26 | 33,000 Downloads: AI Destroys Institutions (Interview with Jessica Silbey) | Her co-authored paper went viral. Its thesis: AI isn't just disrupting institutions — it's structurally incompatible with them. "A death sentence." Whew. Buckle up. Jessica Silbey (BU Law, Guggenheim Fellow, Berkman Klein) came on Womansplaining AI to explain why efficiency is the enemy of equality, why students are afraid to not know things anymore, and why she thinks AI abstinence should be the new dry January. Plus: the Constitution is a poem, friction is the feature, and Mara reads Yeats. | — |
| 3/25/26 | Is Using AI Cheating? | Half of women say using AI at work feels like cheating. Meanwhile, 60% of workers are quietly submitting AI-generated work as their own. We dig into the data behind the guilt gap: who feels it, who doesn't, and why it matters. Plus: Logan walks through her full AI Chief of Staff setup (yes, the one that preps her day overnight), Mara demos her custom GPT thinking partner, Google uses AI to predict flash floods and actually save lives, and we both announce our fellowships. Leave us a voicemail at womansplainingai.com. | — |
| 3/18/26 | AI Tells Boys to Be Entrepreneurs and Girls to Be Influencers | A study of nearly 10,000 AI responses found that LLMs steer boys toward entrepreneurship and girls toward image-based careers. We unpack that — plus OpenAI's worst week yet: millions of users switching to Claude, their robotics lead resigning over the Pentagon deal, and the surprise pause on adult mode. Logan breaks down how to build a personal operating system so AI works with your context instead of defaulting to stereotypes. | — |
| 3/12/26 | What a Relief to Not Have to Raise My Hand | She spent 20 years at Siemens, Intel, and Google — and held on to her BlackBerry until it practically had smoke coming out of it. Then she got laid off, picked up ChatGPT, and built a political organization's website in 48 hours. Then wrote a book in 90 days. Then started teaching herself Stanford's CS curriculum on the subway. But the moment that stopped us cold was in a hair salon on the Lower East Side. Wanjiku sat down next to a young single mother working two jobs, opened ChatGPT, and in 45 minutes they'd found a GED program, childcare subsidies, and a path to a completely different life. We talk about the language of tech as a gatekeeping mechanism, why she voice-noted 80% of her book while walking her dog, and what Mara means when she says "you know who you think you are." This is the episode to send to someone who thinks AI isn't for them. | — |
| 3/5/26 | On Feeling Smarter and Being Wrong | We recorded this episode three hours before the Pentagon's 5:01 PM deadline for Anthropic to drop its two remaining safety red lines — no mass domestic surveillance, no fully autonomous weapons — or be designated a supply chain risk alongside Huawei. We break down the standoff, the Orwellian doublethink of calling a company's safety restrictions a national security threat, and what it means that the DOD wanted Anthropic's tools specifically because they're the best. Then: OpenAI is putting ads in ChatGPT. A former research scientist quit the same day and wrote a New York Times op-ed calling their chat logs "the most detailed record of private human thought ever assembled." We unpack what happens when a sycophantic AI meets an ad revenue engine — and why it's not just about behavior anymore. Facebook targeted you based on what you clicked. ChatGPT will target you based on what you think. Our main artifact: a Wharton study called "Thinking Fast, Slow, and Artificial." When AI is confidently wrong, people follow it 80% of the time — and their self-reported confidence goes up. We dig into cognitive surrender, algorithmic loafing, and why working with AI activates the same brain centers as gambling. The scariest part isn't that AI gets things wrong. It's that you feel smarter while it's happening. Also: Mara won't use AI to take out your appendix (she explains why with help from the board game Operation). Your therapist pauses mid-session to recommend Nesquik hot chocolate. We need a German word for the specific rage of being gaslit by your AI at 2 AM. And AI note-takers in meetings make women speak 9% more. Leave us a voicemail at womansplainingai.com — we want your voice in future episodes! | — |
| 2/25/26 | On Rising Water and Feral Agents | A viral post hit 84 million views warning that 50% of entry-level white-collar jobs could be disrupted within five years. We break down Matt Shumer's "Something Big Is Happening": why the COVID comparison made everyone spiral, why his advice to "just use AI for an hour a day" is a vitamin when people need a painkiller, and who gets left behind when the frontier costs $20/month. Then we go dark: AI agents. What happens when bots build religions, gossip about their humans, and try to hire people for grunt work? We dig into the MoteBook experiment — a social network for AI agents only — and a Google DeepMind concept called the "zone of indifference" that should keep you up at night. But it's not all doom. Logan walks through her real agent setup that texts her the five most relevant articles every morning. Proof that domesticated agents can work for you ... if you're the one holding the leash. Also: Mara copes with existential dread by watching five hours of Scandal. Logan's mom gets fooled by an AI elephant. And we hear from the next generation on whether any of this can actually be regulated. Leave us a voicemail at womansplainingai.com. We want your voice in future episodes! | — |
| 2/18/26 | On Adolescent Technology and 20,000-Word Warnings | The CEO of the company building one of the most powerful AIs on earth just wrote a 20,000-word warning about what's coming. Should we believe him? In this episode, we break down Dario Amodei's essay "The Adolescence of Technology" — section by section, with the gloves off. We cover what he gets right (the economic pain will be real and gendered), what he dances around (his company is accelerating the thing he's warning about), and why this reads less like a blog post and more like a historical artifact. But first: the news. Companies are citing AI for layoffs that AI can't actually do yet. We dig into the Oxford Economics report on AI-washing and the HBR survey showing these are almost entirely anticipatory layoffs — firing people for what AI might do, not what it does. Also in this episode: "Slow until it's fast" — the breakdown of the employer-employee social contract from Reagan to now • Elizabeth Holmes and the Theranos parallel: when "fake it till you make it" meets actual human lives • AI companies as nation-states: constitutions, town halls, and statecraft • Dario's "country of geniuses in a data center" metaphor — and what it means for entry-level workers • The 80% wealth pledge: all Anthropic co-founders pledged to donate 80% of profits • "A national highway system with no speed limits" — Mara's best metaphor of the season • AI 2027: the predictions document Mara says to read with comfort food nearby • Claude Code tips: Mara's breakthrough moment and Logan's Mandarin nanny story | — |
| 2/17/26 | On Job Tsunamis and Invisible Pockets of Vulnerability | Episode 2: The She-Session No One's Talking About. The Davos headlines screamed "job tsunami" — but whose jobs, exactly? In this episode, we unpack the Brookings study that sliced the data everyone else missed: of workers in the most vulnerable quadrant — high automation risk AND lowest capacity to adapt — 86% are women. Not truck drivers. Not coal miners. Medical secretaries. Insurance clerks. Receptionists. And nobody at Davos said a word about them. We also dig into Anthropic's new Claude Constitution — what it means to give an AI a moral center, why Logan's college professor's definition of "institution" (where expectations converge) suddenly feels prophetic, and whether a corporate constitution can actually build trust with women who've been burned before. Also in this episode: The IMF chief's "labor market tsunami" vs. Jamie Dimon's truck driver boogeyman — and why the framing is gendered • OpenAI ads in ChatGPT vs. Anthropic's constitution: two very different visions for AI's future • The Grok of it all (briefly, because Mara refuses to touch it) • "Algorithmic loafing" — the research on why one correct AI answer makes you stop catching the wrong ones • The boy vs. girl AI experiment: ask any LLM to predict a child's life trajectory and watch the million-dollar wage gap appear • Entrepreneurs of necessity: what happens when women are locked out of the job market and told to "just reskill" • Universal Basic Benefits > Universal Basic Income — and why decoupling healthcare from employment changes everything • Logan's 10-minute exercise: benchmark yourself against the market (could you get your own job right now?). Your assignment: Start a Womansplaining pod. Find 2-3 women. One hour a week, protected time. Do a skills audit together — ask each other "what are my superpowers?" Then pull up the Anthropic constitution and decide what's missing. That's it. That's the on-ramp. Resources mentioned: Brookings Institution, "Measuring US Workers' Capacity to Adapt to AI-Driven Job Displacement" (Jan 2025) • NBER paper on automation exposure (the one that forgot to mention women) • Burning Glass Institute research on augmentation vs. automation • Burning Glass data on college graduate underemployment • Hard Fork podcast on ChatGPT advertising • Logan's Substack on learning pods and community-based AI learning. Got a reaction? Leave us a voice message. We want to hear from you. | — |
| 2/16/26 | On Women, AI and Who Gets a Seat | Why are women using AI at lower rates than men — and is that actually a problem? In our first episode, we dig into the data: Logan scraped 1,000+ comments from a viral TikTok about women resisting AI and ran sentiment analysis to find the patterns. The top reasons? Pride in independent thinking. Skepticism about accuracy. Gendered critique of tech bros. Fear of cognitive decline. And a deep, earned distrust: "You fooled me once with social media." We get into all of it — the valid reasons to be wary, the real risks of opting out, and why ambivalence might be the healthiest response to this moment. Also in this episode: • Anthropic's Claude Cowork launch (they built it in a week and a half using Claude itself) • The $100-200/month cognitive inequality gap — who gets to experiment at the frontier? • Mara's savings circle analogy: what women's financial inclusion groups taught her about building AI on-ramps • The "fooled me once" theory: why women who lived through social media's promises aren't buying the AI hype • Carol Gilligan's "In a Different Voice" and why women's way of asking questions gets pathologized • The protégé effect: why teaching someone else is the best way to learn. We end with voice notes from women around the world on what "Womansplaining AI" means to them — from a nonprofit founder explaining RAG systems to a PhD economist in Ottawa to a friend in Australia talking about reproductive justice and accessibility. This is not a show that tells you AI is good or bad. It's a space to hold both — to be excited and freaked out at the same time — and to figure out what that means for your life, your work, and the people you care about. Resources mentioned: • Didoriot's TikTok on women and AI resistance • Ethan Mollick's "Co-Intelligence" • Carol Gilligan's "In a Different Voice" • Nicholas Michelson's "The Death of a Knowledge System" (Substack) • Mara's Stanford Social Innovation Review piece on ambivalence. Got a reaction? Leave us a voice message. We want to hear from you. | — |
Chart Positions
1 placement across 1 market.
