
Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience (est. listeners per new episode within ~30 days): 10,001–25,000
- Monthly Reach (unique listeners across all episodes, 30 days): 25,001–75,000
- Active Followers (loyal subscribers who consistently listen): 15,001–40,000
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers: —
Total Plays: —
Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
Robots Don't Have to Be Creepy. Meet the Dancer Reimagining Them. | Catie Cuan (Founder & CEO, ART Lab)
May 5, 2026
51m 05s
The Goblin in the Machine | FAFO Friday
May 2, 2026
35m 14s
AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"
Apr 28, 2026
55m 18s
We Won a Webby Award! Who Could've Predicted That? And Are All Predictions Bunk Anyway?
Apr 25, 2026
38m 38s
"I Can't Believe It's Not Software!" Paul Ford on AI and the Asterisk*
Apr 21, 2026
45m 11s
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 5/5/26 | Robots Don't Have to Be Creepy. Meet the Dancer Reimagining Them. \| Catie Cuan (Founder & CEO, ART Lab) | Catie Cuan's dad was in the hospital, surrounded by machines that were supposed to help him. Instead they made him feel alienated and afraid. Catie, a dancer-turned-roboticist, realized it's not enough for a machine to do its job — it has to be relatable, too. Today she's the founder and CEO of ART Lab, focused on what she calls the "interaction gap" between what a robot can do and how it makes us feel. Catie danced at the Metropolitan Opera Ballet and ran her own dance company before getting her PhD at Stanford and becoming an artist-in-residence at Google X, where she worked on the Everyday Robots moonshot — including teaching office robots that it's rude to cut between two people having a conversation. Now ART Lab is building a home robot that won't look anything like a robot, plus a new kind of AI model that conditions success on how the human in the room responds, not just whether the task got done. Listen for the case against humanoids, why the future of AI shouldn't live inside your phone, and a sneak peek at what our life with robots might look like. Chapters: (02:11) - "There will be billions of robots" – from dishwashers to elder care (04:45) - Why robots can be capable and still feel unsettling (08:00) - How robots could read your reactions and respond in real time (11:45) - What shape should robots take? (15:30) - The case against humanoids (19:00) - A nine-foot robot hand and the wild future robot design could take (23:15) - What it's like to dance with robots (28:30) - "The robot just died" – when a live failure changed the whole performance (32:45) - Friendship, loneliness, and home robots (and why builders need to be clear about the future they are creating) (37:11) - Why the home may become robotics' biggest use case (and what ART Lab is building) (40:06) - Robot tutors, homework help, and why teachers still matter most (43:51) - "We have a tremendous amount of agency" – choosing the future we build now (46:16) - Why inequality and access worry Catie most (and who gets left behind) (48:56) - Why builders need to get outside their own bubble. Support Future Around & Find Out · Follow Dan on LinkedIn · Get the free newsletter · Become a paid subscriber and help future proof FAFO! | 51m 05s |
| 5/2/26 | The Goblin in the Machine \| FAFO Friday | I don't think we pause enough to marvel at how freakin' weird AI is. Here's an actual instruction from OpenAI to its latest model: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant." Apparently goblins and mythical creatures crept in when OpenAI released its "nerdy" personality a few models back and the mythical creatures have just proliferated ever since. It's a bizarre example of AI bias and, as it's relatively adorable, one that OpenAI was happy to write about. But what else is lurking? That's the jumping off point for Kwaku Aning and me (Dan Blumberg) on this latest FAFO Friday edition, which plays off of Tuesday's interview with responsible AI expert Rumman Chowdhury. Along the way, we discuss AI personalities, TV commercials, and brand strategies, how AI thinks you should shoot a three-pointer, what gets lost when humans no longer write the code, and why we need (?) whimsical garbage cans. Plus, we tie a few stories together: why a reckoning is coming for the all-you-can-eat-AI-token-buffet, as the "millennial lifestyle subsidy" for AI is ending, tokenmaxxing, the growing (and bipartisan!) data center backlash, and why Earth's (AI-powering) solar panels may soon run 24/7 thanks to light redirected from outer space. Links: Where the goblins came from (OpenAI blog post) · My interview with responsible AI expert Dr. Rumman Chowdhury (Future Around & Find Out) · GitHub Copilot is moving to usage-based billing (GitHub announcement) · 'The Most Bipartisan Issue Since Beer': Opposition to Data Centers (NYTimes, gift link) · Meta inks deal for solar power at night, beamed from space (TechCrunch). Support Future Around & Find Out · Follow Dan on LinkedIn · Get the free newsletter · Become a paid subscriber and help future proof FAFO! | 35m 14s |
| 4/28/26 | AI doesn't do anything. We do. \| Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing" | Rumman Chowdhury wants to remind you that "AI isn't doing anything." We do things. AI is not to blame for layoffs or if you're denied medical coverage. People are. Eight years ago, Rumman coined the term "moral outsourcing" to describe this excuse where we blame tech for decisions that people make. Why do the semantics matter? Because, Rumman says: In world one, where "AI did X," it's very scary. It's like, "oh my gosh, this thing that is bigger and smarter than me has come and descended and now it's gonna wipe out every job." [But if we center on people, then we have agency and accountability and we can say] "no, you built a thing that was broken and flawed." Rumman is the founder and CEO of Human Intelligence PBC, which is building evaluation infrastructure to make Gen AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard. In this conversation: Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made · How to avoid — or at least how to mitigate — creating AI that's biased · Red teaming AI and creating bias bounties · The "grandma hack" and other ways regular people accidentally jailbreak AI models · How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong · Why the benchmarks you see when a new model drops are "basically spelling tests" · AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive · What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before. Chapters: (00:00) - "The thing I believe in the most is human agency" (02:14) - Why builders have more agency than they realize (04:00) - What is a bias bounty? (06:41) - What 2,000 hackers at DEF CON found (09:40) - The grandma hack (11:30) - Why guardrails fall apart (14:54) - Anthropic's new bug-finding model and the cat-and-mouse game (19:10) - Why most evals are "basically spelling tests" (21:30) - How to actually evaluate an AI agent (27:16) - "Moral outsourcing" and the AI layoff lie (29:41) - Inside Rumman's tenure as U.S. AI Science Envoy (33:06) - The legal loophole AI companies use to dodge liability (36:31) - AI psychosis and the cold emails Rumman gets (39:36) - Why Google's AI overview is quietly dangerous (45:31) - The problem with "AI literacy" (49:01) - Can we trust anything we see anymore? (51:11) - What builders can do right now to take back agency. Support Future Around & Find Out · Follow Dan on LinkedIn · Get the free newsletter · Become a paid subscriber and help future proof FAFO! | 55m 18s |
| 4/25/26 | We Won a Webby Award! Who Could've Predicted That? And Are All Predictions Bunk Anyway? | We won the Webby Award for best tech podcast of 2026!!! I'm stunned! But Kwaku doesn't like it when I say stuff like that, because as he reminds me in this "FAFO Friday" edition, "sometimes good things happen to good people." OK, I'll take it. We won! And now I need to prepare a five word speech to give. "FAFO Fridays Are My Favorite" comes to mind... But really, who could've predicted this? And also, are all predictions bunk? Kwaku just returned from a week at "Big TED" and he reports back that the talk everyone is talking about is "Beware the power of prediction" from philosopher and AI ethicist Carissa Véliz. What do the story of Oedipus and your insurance premiums have in common? They are both driven by self-fulfilling prophecies, according to Véliz, and she warns us, on stage and in her new book, that we should be wary of false prophets — and of relying on AI-driven predictions. Some predictions are useful, she says: weather forecasts are great because the weather doesn't care what you predict, but others become self-fulfilling prophecies: if an AI says someone is uninsurable and then you deny them insurance, then yes, they are uninsurable, but were they before you (or your algorithm) said so? It all speaks to a powerlessness many of us feel. Speaking of which… Meta just rolled out employee surveillance that tracks keystrokes, mouse clicks, and periodic screenshots — to train AI on their employees' own jobs… Someone threw a Molotov cocktail at Sam Altman's house… The anti-data-center backlash is getting physical. And (sorry) here's a prediction: if people don't start feeling like they have some agency, we're going to see more of this (especially in an election year). But as Kwaku puts it, we are the fuel. AI does nothing without us, so let's reclaim our agency, because… The Future Needs a Word. That's one of the five-word speech options we consider. I'm drawn to it, but not sold on it, so please share your own suggestions… --- FutureAround.com is the home for Future Around & Find Out. Go there to subscribe to the newsletter and to contribute to the show. And, as always, please tell a friend about the show. That's how podcasts grow. | 38m 38s |
| 4/21/26 | "I Can't Believe It's Not Software!" Paul Ford on AI and the Asterisk* | So what even is "real" software anyway? Someone builds an app over the weekend. It works. It looks good. And then the search begins — for the asterisk. Security? Design quality? Can it go to production? Paul Ford says we're in a new era: "I can't believe it's not software!" Paul is the co-founder of Aboard, where he helps organizations build custom software quickly, using AI tools. He's also one of my favorite tech writers. You may know him from "What Is Code," the opus he wrote for Bloomberg Businessweek a decade ago, or from his writing in the New York Times, including his recent opinion piece, The A.I. Disruption We've Been Waiting for Has Arrived. Or perhaps you're hip to Ftrain, where he's been writing for longer than we've had the word "blog." In this conversation, recorded at Aboard's podcast studio (Paul and his cofounder also host a great show), we dig into the strange new world where roles are colliding, software* gets built quickly, and no one is quite sure what to teach their kids. We get into: What Paul calls "the great search for the asterisk" — the moment someone demos an app and everyone scrambles to find the catch · How the power dynamic between engineers and everyone else is fundamentally shifting — and why that's both liberating and destabilizing · Why vibe-coded prototypes are changing how agencies pitch and price their work — and why pricing is "very unresolved" · The skills that actually matter now: client communication, systems thinking, and depth over velocity · Why "the environmental costs [of AI] have become essentially a truthful folk narrative to talk about how difficult and scary and painful it is to see your life get continually smashed into bits." · What he's teaching his kids (hint: it's not to code). Chapters: (01:40) - "We're in a funny moment now" – catching up on the ten years since "What Is Code?" (05:30) - "You gotta stop fighting" - AI code is genuinely useful, caveats and all (08:44) - AI enables people who could never afford custom software to have it (09:50) - Why he knew he'd get yelled at for his recent piece in the NYTimes (13:00) - "AI washing" and job cuts (14:50) - Paul's theory for why the market oscillates so wildly on AI news + are we going to vibe code our own DoorDash? (17:00) - What's the hardest thing about building with AI right now? (19:36) - Hiring, the most in-demand skills, and "forward-deployed engineers" (27:50) - "Product is still hard" – in response to: "What is something that AI will never be great at?" (31:36) - "What is something that sounds like science fiction, but that will soon be real — and commonplace?" (32:46) - Why Paul is excited about world models (and thinks LLMs are topping out) (36:06) - Why environmental concerns have become a "truthful folk narrative about how difficult and scary" AI is (39:26) - There is no magic solution for climate (but one positive thing AI can do is help digest climate data) (41:26) - Why kids should learn systems thinking. Support Future Around & Find Out · Get the free newsletter · Become a paid subscriber and help future proof this thing! · Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com | 45m 11s |
| 4/16/26 | We're a Webby nominee for Best Tech Podcast! Please vote! And here are the FAFO highlights the Webby's loved so much | Hey everyone... so, in case you haven't heard... this show, Future Around & Find Out, has been nominated for a Webby for best tech podcast! *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology *** I was kind of being chill about this. I am, admittedly, not my own best hype man, but then I got riled up when I heard the hosts of The Vergecast, one of the other nominees and last year's winner, complain that they weren't winning by enough votes and that they wanted to win by such a large margin that it -- quote -- hurts everyone's feelings. Well, those are my feelings Nilay Patel was talking about! Look, I like the Verge -- and I definitely didn't have them on my list of people I might feud with this year -- but f* those guys! Let's win this thing! So could you please vote? Today, April 16th is the last day to do so and we're currently just behind, in second place. The link to vote is in the show notes. You can also find it on the show's website at Future Around dot com. And what is it you're voting for? Well, if you've been listening then you already know what this show is all about, but I also thought for newbies and even for long-time listeners, it might be fun for you to hear exactly what the Webby judges listened to when they voted for FAFO to be a best tech podcast nominee. They ask for ten minutes of audio, so I made a highlight reel — and here it is. *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology *** | 11m 00s |
| 4/14/26 | We Need Inventors. And Inventors Need Us. Pablos Holman on Finding and Backing Zero to One Builders | We live in a world where every crisis lands in your pocket the moment it happens. The result? We're more informed than ever — and somehow less capable of doing anything about it. Inventor and investor Pablos Holman has a diagnosis: we're spreading ourselves across every problem, which means we're solving none of them. His prescription is uncomfortable — pick one thing, go all in, and cut the noise. *** QUICK PLUG: Future Around & Find Out is nominated for a Webby for best tech podcast! Voting is open now for the People's Choice Award. Please vote before April 16th! https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology *** Pablos is the co-founder of Deep Futures, where he hunts for inventors tackling world-scale problems: energy, water, food, waste, transportation. Not apps. Atoms. And thanks to advances in AI and software, these "impossible" problems are more solvable than ever — if the right people show up to back them. In this conversation, recorded at the fabulous PopTech conference, he makes the case that inventors are the most important creative class on earth — and the most invisible. They're undersupported, uncelebrated, and working alone in garages. Some of them are probably going to blow themselves up. Those are exactly the people he's looking for. We get into: Why doomscrolling is literally eroding your ability to make a difference · The difference between craft (optimization) and creation (zero-to-one) — and why AI is great at one and struggling with the other · Why you can name 100 musicians but fewer than two living inventors · How solving energy unlocks clean water, sanitation, and climate — essentially for free · Why software people are uniquely positioned to work on the hardest problems in the world right now. Chapters: (01:15) - Why the world isn't as broken as your newsfeed makes it seem (03:00) - The sticky note exercise: how to pick the one problem worth your time (04:30) - Inventors are the most important creative class nobody talks about (07:00) - Living inventors you should actually know (09:00) - What AI is good at — and what it still can't do (12:30) - Why software people are the right ones to tackle deep tech problems (22:56) - Energy is the root problem — solve it and you solve a lot else (25:56) - Climate change needs a thousand solutions, not one big fix (28:26) - The fashion industry's dirty secret and what robots can do about it. Links & Resources: Pablos Holman on LinkedIn · Deep Future: VC firm, book, and podcast · Support Future Around & Find Out · FAFO is nominated for a Webby for best tech podcast! Vote now! · Get the free newsletter · And consider becoming a paid subscriber and help future proof this thing! · Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com --- Pablos's first appearance on the show covers his work at Blue Origin and Intellectual Ventures. Scroll in your podcast app to July 2025 to find that fun conversation. (Can listen before or after this one; not a prerequisite.) | 31m 36s |
| 4/11/26 | The Moon, the Mythos, the Mayhem \| FAFO Friday | Hey, great news! We've been nominated by the Webby Awards for best tech podcast! Voting is open now and we're in second place for the popular choice prize. Just behind The Verge. They really don't need this win, but it would really help this show grow. Would you please (ask a friend to) vote for Future Around & Find Out? *** VOTE FOR FAFO *** OK, here's this week's FAFO Friday… (we record on Fridays and the show has Friday/weekend vibes, so just go with it no matter what day of the week it is :) This week, Kwaku and I… Gape at the moon in wonder · Ask why we sent humans on this mission when space robots could've done the job (related: why climb Mount Everest?) · Marvel at Anthropic's new Mythos model, which they say is remarkably good at finding flaws in the world's critical software — or is this just another example of their marketing savvy? — or both!? · Dig into AI world models and Jeff Bezos's (modestly named) Project Prometheus · Ask whether we want robots in our houses (yes, but only if they're dumb) · Keep FAFO weird (because in the age of AI that's how you prove you're human). *** VOTE FOR FAFO *** | 33m 58s |
| 4/7/26 | Trust Is All That's Left: How AI Scrambles the Creator Economy \| Jim Louderback Live from SXSW | Future Around & Find Out is a best technology podcast nominee! And with your help it could be a winner. The Webby Awards voting is open now. Please vote for FAFO! Thanks to AI, "content is about to become infinite." And just like the Internet disrupted distribution, AI is disrupting creation. And so when anyone, anywhere can create content, what's left? What's defensible? That would be trust and humanity. Live from Podcast Movement Evolutions at SXSW, I sit down with Jim Louderback — former VidCon CEO, Inside the Creator Economy newsletter writer, and media veteran — to unpack what's actually changing and what builders and creators should do about it. We get into why the "age of perfection" is over, why founders need a meme instead of an elevator pitch, and why putting a creator on your cap table might be the smartest move a startup can make. Jim makes the case for a trust economy where views and likes are meaningless — and where the real question is how far your trust graph extends. We also talk digital twins (and what happens when yours goes rogue), why events are still the best way to prove you're human, the state of journalism and public media, and why 2004's "Subservient Chicken" was so ahead of its time. Chapters: (01:30) - How AI disrupts creation (03:50) - The number of creators is about to double to 500 million (06:45) - We'll have "certified human" labels, just like "organic" and why the Subservient Chicken was so far ahead of its time (08:40) - The age of perfection is over (10:00) - The only thing that matters is trust (12:00) - Events, FTW! (13:45) - The elements of a great event are timeless (18:11) - Favorite moments from SXSW (21:56) - What's your meme? > What's your elevator pitch? (23:28) - Put a creator on the cap table (27:21) - Creator-community fit (29:38) - The challenges of being a journalist today (32:26) - Create your own digital twin (36:26) - Why John Green's jaw dropped when he learned of Dan's grandma. --- Future Around & Find Out · Vote for FAFO to be a Webby Awards winner! · Get the newsletter · Sponsor the show? Want to share your message with senior technologists? Email Dan: dan@futurearound.com | 39m 28s |
| 4/3/26 | The Fart App Era of AI Is Over. Now What? \| FAFO Friday Vibes From SXSW Rooftop With Rob Kenedi | Very fun news… The Webby Awards have just nominated Future Around & Find Out for Best Technology Podcast!!! And you can help make it a Popular Choice winner. Winning would be great for the growth of the show. Thank you! Please vote for FAFO! --- OK, on to today's episode… it's another good vibes rooftop episode recorded at SXSW. For the second year in a row I'm joined by Rob Kenedi, a fellow podcaster, who is the founder and host of Decelerator. Last year he, very memorably, said we were in the "fart app era of AI". Meaning: people are trying stuff (a la the make-a-fart-sound apps that people built in the early days of the iPhone). So, we revisit that comment and ask where are we now? And what's defensible for app makers — and for creators like us? Podcaster that he is, Rob turns the tables on me and asks me a bunch of questions about how I'm approaching this question, and I shared what was top of mind when I rebranded the show recently from CRAFTED. to Future Around & Find Out. Namely, I wanted to give the show more personality, because that is how you stand out right now and that's what's going to be defensible in the age of AI. And the Webby nomination — you already voted, right?? — only makes me feel more confident in the rebrand and the addition of more of these casual "FAFO Friday" episodes that feature a lot more of my (and regular guest Kwaku Aning's) personality. (You'll hear more about how AI is changing the creator economy and why "being human" is so important in a few days when my episode with Jim Louderback, writer of Inside the Creator Economy, comes out; he gives a great annual talk at SXSW and I had seen it the day before recording with Rob.) In the meantime, come join Rob and me for some good vibes from Austin… And (ask your friends to) vote: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology | 35m 08s |
| 3/31/26 | Cooling Earth with everything from mushroom bacon to giant sky parasols \| Eben Bayer (climate-tech founder) | Climate-tech founder Eben Bayer is on a mission to protect Spaceship Earth. And he says it's time for climate control, i.e. active measures that cool the Earth. Why? "Because all other reasonable approaches have failed miserably," he says, slapping the table for emphasis. Eben is the co-founder of Ecovative and MyForest Foods, the makers of MyBacon, which is sold in more than three thousand stores. It's a non-meat bacon, made from mycelium, which (more or less) means mushroom roots. Fewer people eating meat → fewer farting animals → a cooler planet. And Eben's latest Earth-cooling idea is (nearly) out of this world. Eben wants to put giant parasols in the stratosphere where they could block sunlight from reaching Earth. With "shade-as-a-service" a maxed-out utility (say in Phoenix) could pay for shade to cool a city, or an individual could pinpoint a shadow over their backyard for an afternoon barbecue. The idea is in its early stages, but Eben says it's feasible and it's the kind of big idea we need to get climate change under control. And while the idea of messing with the sun may sound scary, he says we alter the climate in all sorts of ways already: "We are geo-engineers. We farm animal livestock. We live on Planet Earth. We have impact. We emit CO2. We should not limit ourselves to modifying just one or two atmospheric gases to modify the planet. It's not how we operate, and it's an unbelievably constraining framing if you actually want to address this problem in a practical manner... When you start to take that frame, the options open way up." Eben is a fascinating guy — very steampunk in his approach to entrepreneurship — and I'm sure you'll find this interview eye-opening. And a special shout out to my field producer for this onsite recording from Troy, NY: my eleven-year-old son, Julian! He was my camera and sound guy and he also makes his long-awaited (YouTube!) debut to ask Eben a question about protecting Spaceship Earth. 🤩 Thanks also to PopTech, the amazing tech conference where I first met Eben, and where he became a fellow more than a decade ago as he was just scaling up. Support Future Around & Find Out · Follow Dan on LinkedIn · Get the free Future Around & Find Out newsletter · Become a paid subscriber and help future proof the podcast! · Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com --- Music by Jonathan Zalben | 35m 58s |
| 3/28/26 | Melania's Humanoid Guest, Robot Teachers, and What We Lose When Learning Is "Instant" | "Imagine a humanoid educator named 'Plato'… Access to the classical studies is now instantaneous: literature, science, art, philosophy, mathematics, and history. Humanity's entire corpus of information is available in the comfort of your home." — Melania Trump, Futurist. Ah, yes, I can't wait for my children to learn from an embodied AI. And that their access to everything be "instantaneous." No struggle. No unreliable (fleshy) teachers. Just an embodied AI stuffed with the "entire corpus of information." What an inspiring vision! Regular listeners to Future Around & Find Out will know that I'm a fan of robots (think: self-driving cars), but really don't understand why they need arms and legs (whether dog- or human-shaped). Well, as you may have seen, our fever dream of AI with arms and legs reached the White House, with Melania and "Figure 3" competing to see which one could walk and talk more haltingly. (The robot was more engaging to listen to.) The robot was there, along with a patronizing display of first spouses from around the world, for a summit on education technology. So Kwaku and I use it as a jumping off point for this week's FAFO Friday (yes, delivered on a Saturday). Kwaku, a tech consultant to many schools, and I discuss this insatiable need for humanoid robots, AI, and instant gratification. And, following up on my conversation earlier this week with Khan Academy's Chief Learning Officer, Kristen DiCerbo, we discuss what counts as a "productive struggle" and what's wasted effort when it comes to AI and learning. Please enjoy this very human conversation… full of totally unnecessary tangents, riffs, asides, non-sequiturs, and other detours that Plato, the humanoid teacher, would find inefficient and useless. 🙂 --- Subscribe to the Future Around & Find Out newsletter: https://www.futurearound.com | 34m 21s |
| 3/24/26 | What should kids study? How should AI help? Khan Academy's learning chief on productive struggle | So what the heck should kids be studying today!? That's my opening question to Khan Academy's Chief Learning Officer, Kristen DiCerbo, a learning and AI expert who is back for her second appearance on the show. We discuss: Kristen's advice for what to study today: fundamentals, AI literacy, and critical thinking · How helpful should AI be? Why the productive struggle is critical to learning, but also why we shouldn't "fetishize" struggle · When should AI be Socratic — "and why do you think that?" — and when should it just give you the answer? · The "5% problem" — why edtech that's proven to work still barely gets used · How Khan Academy overhauled its classroom platform and evolved Khanmigo from a standalone chatbot into something woven into the whole learning experience · Personalization that actually works · How Khan Academy uses LLMs as judges to evaluate 20,000 student interactions a day · The scenario planning report that imagines deepfakes of school principals and AI going underground in schools · What parents should be asking their kids' schools about AI right now · What it looks like when a school implements AI well — and what it looks like when they don't. Chapters: (01:44) - What the heck should kids be studying right now? (03:55) - Teaching critical thinking in the age of AI (06:37) - What successful schools are doing differently (08:37) - The real risk: not that kids use AI too much, but that they don't use it enough (10:52) - My 13-year-old has to check five apps just to find his homework (11:52) - "Beyond the AI inflection point" — three scenarios, none of them great (16:30) - Should we just make every school a trade school? (19:41) - What should parents be asking their kids' schools? (22:27) - Khan Academy's Winchester Mystery House problem (26:28) - Personalized learning — what works and what surprisingly doesn't (29:32) - Kids are bad at asking questions and that's actually the point (32:01) - "I DON'T KNOW" in all capital letters — the Socratic method's breaking point (34:26) - Should an AI tutor give tough love? (37:01) - Why Khanmigo is fundamentally different from ChatGPT (40:11) - Don't fetishize struggle — but your kid still needs it (42:39) - Khan Academy's productive struggle: building evals from scratch (45:41) - What gives Kristen optimism. Support Future Around & Find Out · Follow Dan on LinkedIn · Get the free Future Around & Find Out newsletter · Become a paid subscriber and help future proof the podcast! · Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com --- Music by Jonathan Zalben | 46m 58s |
| 3/20/26 | Glimpsed at SXSW: Robot Soccer, AI Sweet Nothings, and Pants That Do the Walking | It's FAFO Friday | South By Southwest was strange this year. No convention center to anchor the event (it’s a giant hole in the ground right now, being rebuilt from scratch, much like [insert your analogy here] will also need to be rebuilt in the age of AI). This South By was a all about convergence. How AI will impact [xyz] continues to be the dominant theme at the conference and in so much tech coverage (including on this podcast; sorry!). So, Kwaku and I report on the convergences we saw (and not only at Amy Webb’s annual talk where “convergence” was her key word). This includes everything from:the RoboCup, a quest (a la Deep Blue winning at chess) for humanoid robots to be able to defeat a team of great humans at soccer pants that you wear (or do they wear you) that are kind of like an e-bike for your legsan AI-powered Cyrano de Bergerac that can help you whisper sweet nothings in your lover’s earfalling in love with an AI (and their business model)and AI that can tell you whether to have another slice of brisket (yes, duh, you’re in Austin!) So, come on along to Austin for what’s become an annual tradition: Kwaku and my SXSW Rooftop Revue. This year recorded in fabulous 4K with a three camera setup that we didn’t deserve! 
Big thanks to Podcast Movement Evolutions, Nomono, The Podcast Academy, and Simplecast! And stay tuned for a few more episodes from a wild week! Chapters: (00:25) - SXSW 2026: everything everywhere all at once (01:23) - Kwaku stumbles into a World Economic Forum session on convergence (05:54) - Reinforcement learning and robot soccer (09:07) - Amy Webb's three convergences: emotional outsourcing, unlimited labor, human augmentation (09:55) - Pants that are an e-bike for your legs (11:27) - The mental tax of running a fleet of AI agents (13:28) - Your boss wants you to pay for your own augmentation (16:07) - Esther Perel, Spike Jonze, and falling in love with Her business model (18:55) - An AI Cyrano de Bergerac to help you win your lover’s heart (25:30) - IRL is the antidote! --- Future Around & Find Out: Get the newsletter, support the show, check out past episodes: https://www.futurearound.com | 33m 21s | ||||||
| 3/17/26 | "Train the Monkey First" — How Google Built a Moonshot Factory | Astro Teller (Captain of Moonshots) | How do you build a system for turning wild ideas into world-changing innovations? Astro Teller, Captain of Moonshots at X, The Moonshot Factory, has spent over 15 years leading Google’s audacious innovation lab—the birthplace of Waymo, Google Brain, and other breakthrough projects. In this special episode, recorded live in Austin at last year's SXSW, Astro shares the playbook to create a moonshot factory. (I'm at this year's SXSW right now and you'll hear all about it soon. If you are here, drop me a line and let's meet up!) What You’ll Learn in This Episode:
- The “Train the Monkey First” approach to innovation
- Why audacity, humility, and intellectual honesty are key to moonshots
- How your org can get more 10x (not +10%) outcomes — and how to avoid the “innovator’s dilemma”
- Why you should “greenlight everything” and then redlight most projects quickly, following kill criteria you’ve agreed to in advance
- Where X is placing bets today, including climate-tech, modernizing the electric grid and bioengineering
--- Future Around & Find Out newsletter and podcast: https://www.futurearound.com | 31m 23s | ||||||
| 3/13/26 | BONUS: A quick riff on that weird Anthropic graph with Paul Ford | FAFO Friday | Greetings from SXSW, where I'm learning, recording, and eating... You'll hear all about it soon... For now, enjoy this short, sweet, and geeky bonus episode. Have you seen that weird graph about all the jobs that AI is going to kill? It looks like an ink blot or a Rorschach test... It's from an Anthropic report and it's really making the rounds. If you follow tech stuff on social media you've probably seen it. The report is interesting, but I'm convinced people are only sharing it because the graph looks cool and people will think they're smart if they share this inscrutable data visualization... Anyway, here's a very short excerpt of my upcoming interview with Paul Ford (@ftrain), one of my favorite tech writers and the founder of Aboard. He and I took a break from talking AI and such to geek out on this data visualization and why it's so bad, plus I told him about how I used AI to make my own version of a radar graph (about how many, and which kinds of, tacos I will and could theoretically eat in Austin). --- Subscribe to the Future Around & Find Out newsletter! | 4m 04s | ||||||
| 3/10/26 | "It Sounds Like Something From Marvel" — Building an Antivirus for AI... With AI | Daniel Hulme (Founder, Conscium) | So why is one of the world’s leading AI researchers teaching AI to understand pain and suffering? Well, Daniel Hulme says that if we build an empathetic AI, perhaps even a conscious one, then we’ll be safer. His hypothesis is that a "zombie" AI will eat our brains, but an empathetic AI would stay aligned with us. So he's building this "antivirus" (with AI, of course) and he's very aware that this sounds crazy or like "something from Marvel." That's just some of what broke my brain in this conversation with one of the world's top AI researchers and founders. And Daniel has serious credibility, so I'm not dismissing the threat he sees — you know, the one where we all get turned into paperclips. Daniel sold his company Satalia to WPP, where he now serves as Chief AI Officer. He’s just founded Conscium, which verifies that AI agents are safe and can do what they promise — and is also researching consciousness and pain. Some of the world’s leading AI thinkers are on the advisory board and Daniel has been in this space for decades: we’ll talk about why, for his PhD, he studied bumblebee brains (yes, really — and it's deeply relevant). 
We get into:
- His unified theory of consciousness — his "color wheel" model — and why he thinks consciousness only exists in motion
- Why he believes large language models are ultimately a dead end — and what neuromorphic computing could replace them with
- What bumblebee brains can teach us about building AI that's up to a thousand times more energy efficient
- Why he calls today's AI agents "intoxicated graduates" — and says companies should spend 80% of their time testing them
- The concept of "mind crime" — the idea that we could build conscious AI and accidentally put it through horrendous suffering without realizing it
- His vision of a "protopia" — where AI makes food, healthcare, education, and energy so abundant that people are freed from economic constraints to pursue what actually matters
We future around and find out a lot in this one! --- Chapters: (01:39) - "Would a conscious superintelligence be safer than a zombie one?" (03:37) - The paperclip problem is not hypothetical (05:06) - Conscium's mission — AI safety for humans and for AI themselves (08:50) - "I think I've got my head around consciousness" (11:57) - The color wheel model — why consciousness only exists in motion (13:58) - Teaching AI morals through evolution, not guardrails (17:23) - "Hey Claude, are you conscious?" — how do you test for that? (21:07) - What bumblebee brains can teach us about building better AI (24:14) - "I think we are completely scaling wrong" (29:43) - Why Daniel calls AI agents "intoxicated graduates" (32:48) - Companies should spend 80% of their time testing agents (38:19) - "What would you do if you were economically free?" --- Links: Conscium · Daniel Hulme on Wikipedia · Daniel on LinkedIn --- | 42m 19s | ||||||
| 3/6/26 | Choose Your Own Adventure | It's FAFO Friday | So how do Kwaku's kids know that it's FAFO Friday? "They're like, 'oh, we know you're doing the podcast 'cause we just hear you cackling through the walls.'" So laugh along with Kwaku and me today as we work our way through a quick victory lap (stuff we said would happen last week happened!), why Sam is like that desperate guy at the bar who refuses to go home alone, quantum computing explained via children's literature, why the Jetsons are not reason enough for us to build humanoid robots, robot choreography (are we human or are we dancers?), wen self-driving cars in NY?, riding a wave of green lights up Manhattan's third avenue at 2 AM, artificial wombs and other moonshot off-shoots, and the real origin of Velcro (AI lied to me about it). Plus... goat ranches, breakfast tacos, and what we're most excited about heading into SXSW. It's a choose your own adventure kind of day. Chapters: (01:24) - Victory Lap — We Called It (03:35) - OpenAI's Bar Guy Energy (06:38) - Waymo, Robot Choreography, and Green Light Waves (10:16) - Self-Driving Cars vs. New York Politicians (13:13) - What We're Most Excited About at SXSW (15:41) - Quantum Computing: Choose Your Own Adventure Edition (18:01) - Dire Wolves, Moonshots, and Tech Nobody Sees Coming (24:07) - Why Do Robots Need to Look Like Us? (29:22) - The SXSW Way-Back Machine (36:08) - Increased Regulation: Past, Present, or Future? Support Future Around & Find Out: Follow Dan on LinkedIn · Get the free Future Around & Find Out newsletter · Become a paid subscriber and help future proof the podcast! Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com --- Music by Jonathan Zalben | 42m 24s | ||||||
| 3/3/26 | Dead as a Dodo? Maybe Not! Colossal's Beth Shapiro on the Science of De-Extinction — and Moonshots | So, there are dire wolves living on Earth again. They were “de-extincted” by Colossal Biosciences. And today on the show their Chief Science Officer joins me to share her view on why the de-extinction matters — not as a science project, but because it will help solve problems that threaten every species on earth, including us. Beth Shapiro is the Chief Science Officer at Colossal Biosciences, and she helped to bring back the dire wolf or, as others call it, a gray wolf with 20 genetic edits. There is a fierce debate about what de-extinction even means, and we discuss that, but whatever you call them, there are now three big wolves living in an undisclosed location and they wouldn't be there if not for the DNA that Beth and her team edited. Colossal is also working to bring back the woolly mammoth, the Tasmanian tiger, the dodo and other animals that have long been extinct. Why? Listen to find out… Chapters: (01:19) - The Most Oprah Question Beth's Ever Been Asked (03:04) - Moonshots Require You to Create a Giant List of Problems (04:19) - The Things We’ll Solve Along the Way, a la the Original Moonshot… to the Moon (05:57) - Beth’s Journey: From Broadcast Journalism to Ancient DNA (09:13) - How a Sediment Core Solved a Mammoth Mystery (11:36) - Why Charismatic Animals Matter (a.k.a. Why Riz Is Everything) (12:38) - What’s Up With Romulus, Remus, and Khaleesi? (14:19) - But Are They Really “Dire Wolves”? The Controversy Over 20 Genetic Edits (21:45) - Should We Do This? Beth's Ethics Framework for Builders (23:51) - Advice for Moonshot Builders (25:10) - Why We Want Dodos, Mammoths, and Thylacines Back Links & Resources: Colossal Biosciences · Beth Shapiro · PopTech -- a conference I love! Support Future Around & Find Out: Follow Dan on LinkedIn · Get the free Future Around & Find Out newsletter · Become a paid subscriber and help future proof the podcast! Sponsor the show? 
Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com --- Music by Jonathan Zalben | 33m 12s | ||||||
| 2/27/26 | To Accede or Not To Accede? That Is The Question | It's FAFO Friday! | Murderbots, mass layoffs, and media takeovers — all in one news cycle. Anthropic told the Pentagon "we will not accede." Block cut half its workforce overnight. And the Paramount-Warner Brothers deal raises real questions about who's running the media now. Also, thanks to Nicolás Maduro's fashion sense, Dan's 13-year-old is being called Lil Tator at school and honestly? The kids are all right. Happy FAFO Friday! Here's some of what Kwaku Aning and I get into: (00:00) - Three Stories Broke Last Night (03:16) - Anthropic Tells the Pentagon No (06:24) - Murder Bots, But Human in the Loop (07:00) - The Pentagon's Friday Deadline (09:28) - Why This Is a Huge Win for Anthropic (10:50) - The War for AI Talent (12:57) - Is the Administration Losing Steam? (15:05) - The Paramount-Warner Brothers Deal (17:36) - Who Controls the Media Now? (21:13) - CNN, Independent Media, and the Employee Perspective (23:55) - Block Lays Off 4,000 People (24:14) - The Citrini Research Fiction That Tanked Stocks (27:49) - AI Washing and the Real Reason for Layoffs (30:11) - Will Vibe Coding Replace Real Companies? (33:27) - Mid-Roll Break (34:41) - Past, Present, Future: State-Controlled AI (35:18) - Past, Present, Future: Independent Media (38:03) - SLAPP Lawsuits and Creator Protections (40:23) - Past, Present, Future: Knicks Championship (41:44) - Come See Us at South by Southwest! | 42m 40s | ||||||
| 2/24/26 | "I just want AI to replace me as a scientist" | The co-founder of Diagnostic Robotics predicts the future | Of all the industries AI will transform, Kira Radinsky believes chemistry and biology will change the most. Kira is the co-founder and CTO of Diagnostic Robotics, which uses AI to automate the administrative work that's crushing healthcare teams — so clinicians can actually focus on patients. She's also the co-founder of Mana.bio, where they're accelerating drug discovery by orders of magnitude. She'll tell you she's terrible in the lab. Not because she isn't brilliant, but because she can't pipette without killing the cells. So she’s thrilled that thanks to her skills in data and AI she was able to realize her childhood dream of being a scientist: “I'm not trying to automate everything… Like when, when you say automate drug discovery, I'm not gonna discover everything. I just want to accelerate it, which comes back to my childhood dream: I just didn't want to do it myself. I just want AI to replace me as a scientist. That's it.” But this episode is about more than healthcare. It's about how to build systems that get smarter over time — feedback loops, causal inference, incentivizing algorithms to take risks, and knowing when to optimize for ROI instead of accuracy. 
Lessons that apply whether you're building in biotech or not. We cover:
- How growing up Jewish in Soviet Ukraine — and fleeing to Israel just before the Gulf War — shaped Kira's obsession with predicting the future
- How she built a system that successfully predicted real-world events, including Cuba's first cholera outbreak in 130 years
- How Mana.bio is using AI to build "rocketships" that deliver drugs to the right cells — and how they've done in three months what used to take 20 years
- Why predictions are only valuable if there's something you can do about them — and why that makes healthcare an ideal field for AI
- How to incentivize algorithms to make bolder predictions (it's easy to predict there won't be an earthquake today; it's much harder to say there will be)
- Why causal inference is the most underrated tool in machine learning right now
- How healthcare AI can perpetuate racial bias — and what builders need to do differently
Note: this interview originally aired in October 2024. Chapters: (01:44) - Why predictions are so important to Kira: lessons from fleeing Soviet-era Kyiv (05:10) - Building a prediction engine from 150 years of news (08:35) - How Kira predicted the Cuba cholera outbreak (09:50) - Returning to biology by way of data (12:50) - Predicting healthcare outcomes by finding your patient's twin (17:53) - The racial bias hiding in healthcare AI (19:15) - Building Mana.bio and accelerating drug discovery (24:33) - "In three months, we did what used to take 20 years" (31:44) - Builder tips: ROI, causal inference, and teaching algorithms to explore (35:07) - Planning: Where generative AI needs to improve Links & Resources: Kira Radinsky on LinkedIn · Diagnostic Robotics · Mana.bio Support Future Around & Find Out: Get the free newsletter · And consider becoming a paid subscriber and help future proof this thing! Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? 
Email me: dan@modernproductminds.com --- Music by Jonathan Zalben | 38m 47s | ||||||
| 2/20/26 | AI Delivers Mediocre Results—By Design. So How Do You Stand Out? | MetaLab CEO Luke Des Cotes | You probably know by now that AI is the definition of mediocre. As in: it’s the average of everything it’s been trained on. So how do you get beyond average? How do you build a moat? It certainly doesn’t seem to be via the models. While there are models of the month (hey, Opus 4.6, my new friend!), they seem to be pretty swappable. So, the model ain’t it. But proprietary data (e.g. an AI that knows you really well), yes! Or doing something really hard in the real world (think: Waymo self-driving cars). Maybe via trust and safety (Anthropic is certainly making a play here). Or... how about via amazing design and good taste. Remember when ChatGPT first came out and everyone derided “AI wrappers”… well, maybe a wrapper isn’t so bad, assuming you can differentiate on one or more of the above. Luke Des Cotes is the CEO of MetaLab, the agency famous for designing interfaces, including early versions of Slack and Coinbase, so don’t be shocked when you hear him say that great design can be your moat. MetaLab is working with a host of AI companies (another shocker), including Windsurf (AI + code), Suno (AI + music), Pika (AI + video), and more…, which is why Luke's take on AI surprised me. He's not rah rah. He's pretty judicious actually. 
Luke has questions about AI's costs and appropriateness for lots of use cases like those involving kids, but mostly he objects to its mediocrity. On this episode we discuss what it takes to go beyond. We also get into:
- Why vibe-coded software isn't changing the world anytime soon
- Why Shopify acquired a design agency right after telling employees to justify their existence against AI
- How MetaLab designers are using AI to prototype in hours instead of weeks
- The talent market for zero-to-one designers — and why they're harder to find than ever
- Landlines, brick phones, and how parents are fighting back against always-on kids
Chapters: (01:10) - "It's a race to the mean" (03:10) - "How do you create emotional resonance?" (05:33) - AI companies are burning money (08:44) - Speed to good enough (13:51) - Is the chat here to stay or a temporary fad? (17:43) - It’s hard to find great 0 to 1 design talent (22:28) - Seemingly conscious AI (25:05) - Kids, landlines, and fighting always-on culture (27:21) - Sounds like science fiction, but is here now… Links & Resources: Luke Des Cotes on LinkedIn · MetaLab Support Future Around & Find Out: Get the free newsletter · And consider becoming a paid subscriber and help future proof this thing! Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com | 31m 42s | ||||||
| 2/17/26 | Could AI Make Capitalism Better? Henrik Werdelin Is Optimistic | Henrik Werdelin is one of my favorite entrepreneurs. He’s founded and incubated several unicorns, most notably BARK, the dog happiness company. Henrik himself is a pretty happy guy — an optimistic guy who likes to ask what could go right? — and on the day we recorded (a few months ago as I was squirreling away interviews for the podcast relaunch), he helped me see through some future of tech gloom I was feeling. I honestly can’t even remember what Trump+tech hellscape we were living through that week, but I do remember that Henrik put me in a better mood. I think he’ll do the same for you, no matter how you’re feeling. 🤗 Henrik believes AI could be a massive force for good. That it could bring forth a whole new — a better! — form of capitalism. He writes about this in his latest book, Me, My Customer, and AI. He points to those (like Henry Ford) who took advantage of electricity by making drastic, not incremental, changes to how they build things. Our conversation pairs nicely with my recent episode with Azeem Azhar, who said the AI winners will “come from odd places”, as they have in previous tech transformations. 
Here’s more of what Henrik and I cover:
- His concept of "relationship capital"—the moat AI can't clone—and why the companies that win next will be defined by who they serve, not what they make
- The three components of relationship capital: intensity, community, and durability
- The "it sucks that" method for finding problems worth solving (he took it to a fifth grade class; the teacher was not thrilled)
- His vision for the "headless", agentic web, where your startup's MVP is a group of agents, not an app
- The wildly practical AI tools he's built just for himself: a custom CRM that searches by vibes not names, a newsletter bot tuned to his quarterly goals, and an agent that handled his visa paperwork while he was in a meeting
- Why entrepreneurial skills—agency, narrative, resourcefulness—are the ultimate career insurance, whether you start a company or not
- The absolutely ridiculous story of how a prank on a cruise ship led to him meeting his BARK co-founder in a heart-shaped bed
Chapters: (01:43) - Two Futures: AI Bad vs. AI Really, Really Good (05:44) - Why Positivity Is Actually the Riskier Bet (09:05) - Electricity, AI, and the Rise of Relationship Capital (11:12) - The Three Components of Relationship Capital (14:20) - "It Sucks That" — The Best Way to Find a Real Problem (19:22) - The Headless Future and Minimum Viable Agents (22:40) - N-of-One Software: Building Tools Just for Yourself (26:48) - Henrik's Custom Newsletter Bot and AI-Powered CRM (30:59) - Warp, Obsidian, and Letting Agents Loose on Your Computer (34:45) - Entrepreneurial Skills as Career Insurance (36:53) - The Heart-Shaped Bed: How Henrik Met His BARK Co-Founder Links & Resources: Henrik Werdelin on LinkedIn · Audos, Henrik’s latest venture where he hopes AI agents trained in his methods can help thousands of entrepreneurs (donkeycorns!) 
a year. Beyond the Prompt podcast, from co-hosts Henrik Werdelin and Jeremy Utley. Support Future Around & Find Out: Get the free newsletter · And consider becoming a paid subscriber and help future proof this thing! Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com --- Music by Jonathan Zalben | 39m 22s | ||||||
| 2/13/26 | "Shut Up, C-3PO!" or Do We Have a Duty To Treat Machines Well? | FAFO Friday | Is AI conscious? Will it be someday? And should we be nice to it now... just in case? This FAFO Friday, Kwaku and I dive into the mind-bending world of machine consciousness. We cover a lot of ground, weaving from the different ways that Luke (co-dependent with R2) and Han (barking commands at C-3PO) treat their droids to whether Pascal’s Wager informs whether we should believe in AI consciousness just in case they do come alive and have been keeping score. (Pascal figured it was the safe bet to believe in God, just in case; maybe we should do likewise?) That’s from us knuckleheads, but we’ve also got a true expert on consciousness. This week I interviewed Daniel Hulme, one of the world’s leading AI researchers. He’s the Chief AI Officer at WPP, the CEO of Satalia (which WPP bought), and the founder and CEO of Conscium, which is researching AI consciousness, efficiency (he thinks we’re scaling wrong and LLMs are not the way), and building a platform to verify AI agents are safe. You’ll hear the first five minutes of my interview with Daniel. Daniel was not surprised by Moltbook (the Reddit-style site that AI agents built for themselves). That’s because he’s been putting agents together (in a “primordial soup” as he put it) for decades to observe the wild and wonderful ways they behave and to see if they’d create intelligence. Daniel does not think today’s agents are conscious, but can see a path to it. And he believes that a conscious superintelligence would be safer than a “zombie” one. But mostly he doesn’t want machines to feel pain and suffer. Huh??? My brain is still kind of broken from our hourlong chat, which I’m producing now and will be released in a few weeks. For now, enjoy this preview and more from Kwaku and me as we talk about what we expect from machines, whether we want to be one with them, and more… --- Music by Jonathan Zalben | 18m 50s | ||||||
| 2/10/26 | Everyone's “Jumpy” Right Now: Azeem Azhar on When—Or Is It If?—AI Can Be Profitable | Everyone's feeling jumpy about AI right now—and for good reason. The hype has been massive. The investment has been astronomical. But where's the actual return? In this episode, Azeem Azhar, founder of Exponential View and advisor to tech leaders and governments, breaks down why the next 18 months are make-or-break for AI. Companies need to prove there's real ROI, not just prototypes launched and tokens spent. We cover:
- What hard evidence would actually prove AI is working (hint: it's not usage metrics)
- Who can build a real moat with AI—and why the winners will likely come from unexpected places, as they have in previous tech transformations
- The physical constraints nobody wants to talk about: chips, data centers, power grids, and whether America's infrastructure is up to the task
- Why OpenAI's "ubiquity strategy" might be spreading too thin (and what Anthropic is doing differently)
- The "pragmatic addicts" problem: we're dependent on AI even though we don't trust it
- How Azeem and his team use AI to be more productive, how they automate whatever they can, and why individual contributors are acting more like managers (of AI)
Note: This interview was recorded months before the "SaaSpacolypse" (big market drop) of Feb 2026; the analysis is as relevant as ever. Chapters: (01:51) - Why the next 18 months are the crucible for AI (04:09) - What hard evidence would actually prove AI ROI (not token counts!) (06:55) - Why it's so hard to measure AI's real impact (09:55) - Who can build a moat with AI? 
Winners will be in "odd places" (12:56) - Structural data advantages: why Waymo's edge is hard to replicate (14:34) - Coding agents and whether developers will become disillusioned with them (18:21) - Physical constraints: chips, data centers, power, and America's grid problem (21:25) - How the Gulf countries became an unexpected AI hub (28:32) - "Pragmatic addicts": why 75% of Americans distrust AI but use it anyway (32:15) - The narrative of AI can be very unappealing: heaven on Earth or dystopia (35:06) - How Azeem's team uses AI: augmentation vs. automation (40:36) - What should we be talking about besides AI? (44:16) - Sounds like science fiction: What Azeem can't believe is real and here today Links & Resources: Exponential View: https://www.exponentialview.co/ · Azeem's Boom or Bubble dashboard: https://boomorbubble.ai/ · Azeem's New York Times piece on America's electric grid challenge: https://www.nytimes.com/2024/12/28/opinion/ai-electricity-power-plants.html · More on the “MIT Study” claiming 95% of AI projects fail that Azeem and I both found to be really poorly done, but that nonetheless is quoted by everyone: Here’s Azeem tearing the study apart with data: https://www.exponentialview.co/p/how-95-escaped-into-the-world · And here's me riffing with Kwaku Aning on it. You know why Azeem liked my take? Because I actually read the thing, unlike ~95% of the writers out there who just quoted that 95% number: https://www.futurearound.com/p/did-anyone-actually-read-that-mit-ai-study-that-made-the-markets-swoon-i-did Support Future Around & Find Out: Get the newsletter: https://www.futurearound.com · Become a paid subscriber and help future proof this thing!: https://www.futurearound.com · Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@modernproductminds.com --- Music by Jonathan Zalben | 44m 58s | ||||||
Showing 25 of 122
Chart Positions
3 placements across 3 markets.