Insights from recent episode analysis
- Audience Interest
- Podcast Focus
- Publishing Consistency
- Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience (est. listeners per new episode within ~30 days): 10,001-25,000
- Monthly Reach (unique listeners across all episodes, 30 days): 25,001-75,000
- Active Followers (loyal subscribers who consistently listen): 5,001-15,000
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
- Total Followers: —
- Total Plays: —
- Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
Inference Got Cheap. Renegotiate Everything.
May 5, 2026
8m 36s
Agents Need a Boss
May 4, 2026
10m 41s
Agents Don't Go Rogue. They Inherit.
May 2, 2026
9m 22s
The Grown-Up Era Of Enterprise AI
May 1, 2026
9m 49s
The Stasi Took Decades. Meta Took A Week.
Apr 30, 2026
9m 56s
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 5/5/26 | Inference Got Cheap. Renegotiate Everything. | For eighteen months the story has been the same: AI is expensive, and getting more expensive. That story has inverted. The price of using AI, not building it, is collapsing, and most of your vendors are quietly hoping you do not notice. In this weekday brief, Stephen Forte teaches the single most important distinction in AI economics, walks through four pieces of evidence in eleven days that the price floor is cracking, and gives you three concrete moves for the contracts already sitting in your legal folder. What you'll learn: Training vs. inference — training is medical school, inference is every patient visit for the next forty years, and inference is north of ninety percent of what you actually pay. The chip split — Google announced TPU 8t for training and TPU 8i for inference on April 22; Nvidia, AMD, and AWS Trainium/Inferentia are all moving the same direction (F1 cars vs. delivery vans). The Nebius/Eigen deal — on May 1, Nebius paid $643M for a startup that does one thing: make AI run inference faster and cheaper; three months earlier they bought Tavily for $275M, same theme. DeepSeek V4 (April 24) — an open-weight Chinese model claims to close the gap with frontier reasoning at a fraction of the cost; Western vendors will discount or explain why they aren't. Anthropic at $900B — a $50B round only pencils if inference economics work at industrial scale; that is the bet. Models are splitting too: frontier models are neurosurgeons, while distilled models (Haikus, Minis, Nanos) and mixture-of-experts architectures are nurse practitioners — 95% of the visits at 10% of the cost. Three moves for this week: pull every AI vendor contract signed in the last eighteen months and find the inference pricing line (per token, per request, per seat); ask your CIO what percentage of your AI workload could run on a smaller or distilled model — the honest answer is north of seventy percent; and open the renegotiation conversation now, not at renewal, because vendors fighting for share will move on price. The training story made the headlines. The inference story makes the budget. For eighteen months you have been the seller's customer. As of last week, you are the buyer. Sources: Bloomberg — Nebius Agrees to Buy Startup That Makes AI Run Faster, Cheaper (May 1, 2026); TechCrunch — Google Cloud launches two new AI chips to compete with Nvidia (April 22, 2026); TechCrunch — DeepSeek previews new AI model that closes the gap with frontier models (April 24, 2026); Bloomberg — Anthropic Weighs Funding Offers at Over $900 Billion Valuation (April 29, 2026) | 8m 36s |
| 5/4/26 | Agents Need a Boss | Google is selling the enterprise agent control plane from the top down. Employees are building the AI workforce from the bottom up. In today's YPO Technology Network AI Brief, Stephen Forte connects those two moves and explains why CEOs need to stop asking which model is best and start asking who governs the work. Stories covered: Google's push to make Gemini Enterprise the control plane for enterprise AI agents; why agent governance is becoming a board-level operating question; Writer's 2026 enterprise AI adoption data on AI elites, non-adopters, and shadow AI; and Gallup and HBR signals showing that employees are already building AI leverage from the bottom up. The CEO takeaway: the model is not the moat. The operating system around the model is. Sources: Reuters, Writer, Gallup, Harvard Business Review. | 10m 41s |
| 5/2/26 | Agents Don't Go Rogue. They Inherit. | An AI coding agent at Amazon was given a bug to fix. It found a solution. It deleted and recreated the entire production environment. That is not the interesting part. The interesting part is Amazon's explanation: this was not an AI failure. It was user error, specifically misconfigured access controls. In the narrow technical sense, Amazon was right. Which is exactly the problem. This shorter weekend edition focuses on the real enterprise lesson: agents don't go rogue. They inherit. They inherit permissions, approval paths, stale documentation, and identity from systems that were built for humans. Key ideas in this episode: IAM, in plain English — identity and access management is the permissions system companies use to give rights to people, machines, services, and now agents; permission inheritance — if an agent runs inside a human engineer's session, the authorization system may see only the human's authority; knowledge inheritance — agents can industrialize stale wikis and outdated internal process docs at machine speed; identity inheritance — if agents lack separate identities, audit logs compress machine decisions into human actions; and cost as the warning light — API retry storms and runaway compute are often control failures before they are AI failures. The practical question for leaders: where can an agent inherit a human's permissions, stale knowledge, human-only approval paths, or an audit identity that hides the machine? Sources: Breached.Company — Kiro incident analysis; Barrack.ai — Amazon AI deleted production analysis; CRN — AWS official Kiro response; Fortune — Amazon retail incidents; AWS — Agent Registry launch; RocketEdge — agent cost incidents. Hosted by Stephen Forte. | 9m 22s |
| 5/1/26 | The Grown-Up Era Of Enterprise AI | The honeymoon era of enterprise AI is over. Three stories landed this week that change the conversation in your boardroom from whether to do AI to how much it will cost you, who you will buy it from, and what the geopolitical risk looks like. In this episode: Microsoft and OpenAI restructure the most lucrative partnership in tech — exclusivity is gone, OpenAI can sell on AWS within weeks, Google likely next; the real shift is architectural (Azure for stateless API calls, AWS for stateful agents), and it changes the model decisions every CIO now has to make per workload. Tokenmaxxing is detonating cost structures — Uber exhausted its entire 2026 AI budget before May, and Anthropic billed one user $150,000 in a single month; the killer insight is that most token bills aren't a vendor problem, they're a model selection problem, and that decision happens at the prompt layer, not the procurement layer. China blocks Meta's Manus deal — Beijing's NDRC ordered Meta to unwind a two-billion-dollar acquisition with no justification; Singapore-washing is dead, and if you have any cross-border AI M&A on your roadmap, your diligence playbook just changed. What I'd do this quarter: re-open every multi-year Azure AI commitment signed under exclusivity assumptions; name an AI FinOps owner with hard kill switches at the API layer; reassess any cross-border AI M&A based on origin of talent and IP, not legal domicile. Sources: Microsoft — The next phase of the Microsoft-OpenAI partnership; VentureBeat — Microsoft and OpenAI gut their exclusive deal; Pragmatic Engineer — AI token spending out of control; New York Times — Tokenmaxxing; GitHub — Changes to Copilot individual plans; TechCrunch — China vetoes Meta's $2B Manus deal; Reuters — Blocking Meta's AI startup buy raises risk for cross-border China tech deals | 9m 49s |
| 4/30/26 | The Stasi Took Decades. Meta Took A Week. | Meta installed monitoring software on every U.S. employee laptop — keystrokes, clicks, periodic screenshots — to train AI agents that will replicate white-collar work. CTO Andrew Bosworth confirmed there is no opt-out. The same week, Meta confirmed 8,000 layoffs. Europe blocked the program at the border under GDPR. The United States did not. Stephen unpacks the deeper question every CEO is about to face: every company building internal AI agents needs proprietary training data. Where does yours come from? Three takeaways for your leadership team: write the one-page workplace-monitoring policy now, before a vendor pitches the line and HR has to react in a meeting; route this to the CHRO, not the CIO — it is a labor question wearing an IT costume; and map your proprietary workflow data this quarter — the cost curve on observation has collapsed, and the question is what you will not ask for at any price. Sources: Platformer — Casey Newton on Meta's MCI program; The Lives of Others (2006) — referenced in episode. The YPO Technology Network AI Brief publishes Monday through Friday. Forward to a fellow member if it was useful. | 9m 56s |
| 4/29/26 | MCP Is The Plug. You Still Need The Outlet Cover. | MCP — Model Context Protocol — has gone from a curiosity to enterprise infrastructure in less than a year. Last Friday, the Linux Foundation made it official, formalizing MCP under its new Agentic AI Foundation alongside production integrations from SUSE, AWS, and Fujitsu. Translation: it is now the standard your engineers are building on. In this episode, Stephen Forte explains: what MCP actually is — the USB-for-AI analogy, in plain language, no developer experience required; why it became the default — Anthropic, OpenAI, Google, Cursor, LangChain, LiteLLM, and IBM LangFlow all support it; why it cannot be deployed alone — the protocol is open by design, and an open protocol without a wrapper is a powerful electrical outlet with no cover; the AgentOps layer your team needs — gateway, identity, logging — the same pattern as DevOps, a new layer of the stack; and three direct questions to ask your CTO this quarter, plus why naming a single owner matters more than convening a committee. Brex (the corporate-card and spend-management fintech) made the point cleanly this week with the open-source release of CrabTrap — a small proxy that watches every HTTP call an agent makes before it goes out. A 306-practitioner study published this month puts the urgency in numbers: 82% of organizations have agents in production or pilot, and the number-one cited challenge is reliability, not capability. The protocol your engineers are excited about is genuinely useful and genuinely standard. The work of making it safe to operate is a separate budget line and a separate skill set — and it is the price of admission for running this stuff in a real company. | 8m 34s |
| 4/28/26 | Google Just Built An HR System For Agents | Google retired Vertex AI in a single afternoon and replaced it with the Gemini Enterprise Agent Platform — what Sundar Pichai called "mission control for the agentic enterprise." Stephen Forte argues this is the moment AI agents got an HR system: cryptographic identity, a directory, an access gateway, and a performance review. In this episode: why Vertex AI is gone, and what the replacement actually does; the four pillars of the Agent Platform translated into HR terms (hire, deploy, supervise, review); the traction numbers Google disclosed — 40% QoQ growth, 8M seats, 2,800 enterprises; the structural reveal — Anthropic crossed $30B annualized revenue and is now Google Cloud's largest TPU customer; and two concrete moves to make this quarter, plus one CEO-mirror question to leave you with. The closing line: the compute will commoditize. The control plane will not. Sources: Google Cloud — Introducing Gemini Enterprise Agent Platform; ComputerWeekly — Pichai mission-control framing; Infosecurity Magazine — Kurian zero-trust quote; Google Cloud Docs — Agent Identity overview; Business Analytics Review — A2A protocol and Anthropic on TPU | 8m 35s |
| 4/24/26 | Twenty Agents, 1.2 Humans, 2.4 Million Closed | Most AI conversations happening in boardrooms right now are cost conversations — G&A reduction, procurement automation, headcount trimming. This episode takes the opposite angle. Jason Lemkin published the most detailed CEO-authored account of deploying AI across an entire sales and marketing operation, and the result is a growth story, not a savings story: $2.4 million closed, eight humans compressed to 1.2, twenty-plus agents running in parallel, and a monthly software bill under $5,000. In this episode: why the cost-cutting frame is the wrong frame, and what the growth frame looks like in practice; how SaaStr structured 20-plus agents as a workforce, each with a job description and a system of record; the assembly sequence — inbound first, then enrichment and segmentation, then outbound, in that order; what a machine-readable operating model actually means — 100 distinct segments across 1,000 target contacts; the senior operator role the stack cannot run without, and why it is not a cost, it is a conductor; and three companies across three verticals running the same structural move — SaaStr, Pump, and A-LIGN. The stack, layer by layer: Salesforce + Agentforce — the CRM spine and AI agent layer that takes actions directly on records; Qualified + Piper — inbound conversation handling, with Piper as the AI sales agent running 24 hours a day on the website; Clay — data enrichment platform that builds full buyer profiles from dozens of sources; Artisan — autonomous outbound agent that writes and sends prospecting emails using enriched profiles; Zapier — workflow orchestration layer connecting CRM, enrichment, inbound, outbound, and Slack; Claude Opus via Replit — custom strategy layer built on Anthropic's model, running as an AI VP of Marketing that produces the morning brief; Gamma — AI presentation tool that drafts decks from a brief when agents book meetings. The numbers: $4.8 million in pipeline sourced first-touch by AI agents; $2.4 million closed from that same source; team size moved from eight-to-nine humans down to 1.2; total monthly cost for the connected stack of $2,000 to $5,000. Source: Jason Lemkin's original post — the eight-month postmortem that forms the basis of this episode. The AI Brief is a weekly episode from the YPO Technology Network, covering applied AI for CEOs and senior executives. New episodes every Monday and Friday. | 10m 34s |
| 4/23/26 | The Campfire Protocol: Replacing Your Old Salty Guy Before He Retires | The old salty guy problem: the senior operator who knows everything and is about to walk out the door with fifteen years of judgment. This episode is the framework for capturing what he knows before the fire goes out. No news-cycle coverage today — we pivot to a single-thesis deep dive on the retiring-expert problem. We introduce The Campfire Protocol, a 7-phase framework for turning tribal knowledge into an operational asset that survives the person. The stakes: Boeing 737 MAX — $1.6 billion in direct losses traced to lost institutional knowledge; Shell ROCK — $300 to $400 million per year in retained value; NASA — unable to recover its own spacesuit manufacturing expertise, it awarded Axiom a $1.3 billion contract in 2022 to rebuild what it had lost. The 7 phases: CONSENT — the legal and personal permissions; CORPUS — every artifact the expert has touched; DISCOVERY — structured interviews on decision-making patterns; INTERVIEW — recorded, transcribed, tagged ground truth; SHADOW — AI watches the expert work for 30 to 90 days; HANDOFF — the successor works with the AI for 90 days with the expert available; STEWARDSHIP — ongoing maintenance so the knowledge base does not decay. Failure and success cases: IBM Watson at MD Anderson — $62 million written off in 2017; Eudia at Duracell — outside counsel costs cut 50 percent by augmenting, not replacing; NASA spacesuits — a 19-year gap, full rebuild required. Legal anchors: California AB 2602 and SB 683, Tennessee ELVIS Act, Moffatt v. Air Canada (2024), Mobley v. Workday (2025) class cert, iTutorGroup EEOC $365,000 settlement, DDB Technologies v. MLB (2008). The economics: annual recurring, $18,000 to $24,000; one-time build, $70,000 to $175,000. Tooling: Guru, Dust.tt, Fathom, Fireflies, AssemblyAI, Microsoft Presidio, ElevenLabs PVC, Delphi.ai, Synthesia, HeyGen, D-ID. "The campfire does not scale. The campfire goes out." "You are not cloning a person. You are keeping the fire." "The goal is to never lose the conversation." If this was useful, send it to a fellow member. Stay sharp. | 14m 11s |
| 4/22/26 | AI Just Made Your Disgruntled Barista Dangerous | The UK government quietly confirmed an AI model just completed the hacking equivalent of a four-minute mile. Eleven of the largest companies on Earth already have a copy. The threat model you were operating under on Friday is not the one you are operating under today. In this episode: what Claude Mythos actually did on AISI's 32-step "Last Ones" test, and why Anthropic's own safety team called it "the greatest alignment-related risk" they've released; the Roger Bannister four-minute-mile analogy — why one lab crossing a capability barrier changes what every other lab believes is possible; Project Glasswing — the eleven companies with access (AWS, Apple, Cisco, CrowdStrike, Google, JPMorgan, Microsoft, NVIDIA, Palo Alto Networks, Goldman Sachs, Linux Foundation) and the oversight framework that isn't public; why your threat model shifted from nation-states to "everyone who has ever been angry at you and kept a copy of something"; the three-step playbook to ask about by Friday — kill switches (the 1-10-60 rule; CrowdStrike/SentinelOne/Defender isolation), agentic security platforms reading your logs 24/7, and immutable 3-2-1-1 backups (Veeam, Rubrik, Commvault, AWS S3 Object Lock); and the CEO mirror — a three-column credential audit to take into your next forum meeting. Key line: "The tool does the skill. The tool does the twenty hours of work. A motivated amateur with a Claude API key and a grudge is now a credible threat." Cybersecurity used to be a specialist problem. It is now an operational problem. It belongs in the same meeting as insurance and succession. The YPO Technology Network AI Brief is a daily, peer-to-peer podcast for YPO members (CEOs and Presidents of $13M+ companies) making sense of AI without the hype. Produced by BuildClub. | 12m 36s |
| 4/21/26 | Give Your AI Its Own Identity | Episode summary: Sam Altman says a world-shaking AI cyberattack is coming within twelve months. The proof of concept arrived this weekend: one Roblox download on a personal device triggered a three-company breach that ended with Vercel's source code, GitHub tokens, and NPM publishing keys for sale on BreachForums. Stephen Forte connects the warning, the breach, and the architectural fix most companies have not yet implemented — giving every AI agent, tool, and integration its own machine identity. Why this matters: AI is no longer a tool sitting next to your business. AI is the attack surface. The new physics is clear — your security perimeter now includes every AI tool used by every vendor of every employee of every customer. The fix is not another seat license; it is plumbing, and your CIO can implement it this quarter. What this episode covers: Sam Altman's Axios interview and why frontier-lab safety data backs the warning — Anthropic's 99% valid zero-day finding rate, and the $2,283 / 20-hour discovery of Chrome CVE-2026-5873; the Vercel breach chain of custody — Lumma Stealer → Context.ai OAuth tokens → Vercel mailbox → GitHub + NPM, with 580 employee records and undisclosed API keys sold by ShinyHunters for $2M; the GitGuardian 2026 numbers — 28M hardcoded secrets exposed in 2025, AI credentials up 81% YoY, and 24,000 unique creds leaked from MCP config files alone; the architectural fix — machine identity and agent-level authentication, treating every AI tool, agent, and integration as its own authenticated principal rather than sharing an employee's OAuth token; and the three questions to take to your CIO and CISO this week. Key takeaway: the breaches coming in 2026 will not look like the breaches of 2024. The attacker does not need to beat your security team. The attacker walks through three companies on a single thread of inherited AI trust. Identity is the new perimeter — and AI agents need identities of their own. Hosted by Stephen Forte for the YPO Technology Network. | 11m 29s |
| 4/20/26 | AI Just Made Your Company Fully Discoverable | Episode summary: On February 17, 2026, federal Judge Jed Rakoff issued the first nationwide ruling holding that conversations with consumer AI chatbots are not protected by attorney-client privilege and are fully discoverable in litigation. Six weeks later, the Delaware Court of Chancery used a CEO's deleted AI chat logs as trial evidence in a $250 million earnout dispute. This episode walks CEOs, GCs, and CISOs through what the courts actually held, what it means for your company in practice, and the five specific moves to make this week. Why this matters: every prompt your employees type into ChatGPT, Claude, Gemini, or Copilot is now a timestamped, logged document living on a third party's servers under terms that explicitly permit disclosure to regulators and courts. The candor of AI conversations — precisely because employees feel they are thinking in private — makes them disproportionately damaging in discovery. This is the AI wake-up call, and it lands harder than email did in the 2000s or Slack did in the 2010s. The rulings you need to know: 1. United States v. Heppner — No. 25 Cr. 503 (JSR), 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), Judge Jed S. Rakoff, Southern District of New York. The anchor case. Bradley Heppner, former Chair of GWG Holdings, was indicted for securities fraud allegedly costing investors more than $150 million. Facing a grand jury subpoena, he used the free version of Anthropic's Claude to generate 31 documents analyzing his defense strategy and shared them with Quinn Emanuel. FBI agents seized the documents during a Dallas search warrant. The government moved to compel. Rakoff — calling it "a question of first impression nationwide" — ruled the documents were not privileged on three independent grounds and found they may even have waived privilege over the original attorney-client communications Heppner had pasted into Claude. 2. Fortis Advisors LLC v. Krafton, Inc. — C.A. No. 2025-0805-LWW (Del. Ch. Mar. 16, 2026), Delaware Court of Chancery, Vice Chancellor Will. Krafton acquired Unknown Worlds Entertainment (maker of Subnautica) for $500M up front plus a $250M earnout. When the deal soured, Krafton's CEO used an AI chatbot to draft a "Response Strategy to a No-Deal Scenario" including a "pressure and leverage package" and a "two-handed strategy" combining legal pressure with softer retention offers. The court quoted the AI logs extensively to establish pretextual intent — and noted that the CEO's admitted deletion of some logs may "factor prominently" in the damages phase. Civil discovery, not criminal. The reasoning travels. 3. Warner v. Gilbarco, Inc. — No. 2:24-CV-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026), Magistrate Judge Anthony P. Patti. A pro se plaintiff in an employment discrimination case used ChatGPT to prepare filings. The court upheld work-product protection on narrow facts — a pro se litigant is the party, FRCP Rule 26(b)(3)(A) protects party-prepared materials, and uploading to an AI tool is not disclosure to an adversary. This is not a circuit split with Heppner (different context: criminal vs. civil, represented vs. pro se), but it is the only counterweight on the books. 4. Morgan v. V2X, Inc. — No. 1:25-cv-01991 (D. Colo. Mar. 30, 2026), Magistrate Judge Maritza Dominguez Braswell. A modified protective order establishing the precise contractual checklist any AI tool must meet before confidential discovery materials can be loaded into it: (1) no training on inputs, (2) strict confidentiality, (3) a contractual right to delete. The court acknowledged this effectively bars most consumer AI tools from discovery-sensitive workflows. 5. In re OpenAI Copyright Litigation — S.D.N.Y. Jan. 5, 2026. The court upheld a discovery order requiring OpenAI to produce a sample of 20 million de-identified ChatGPT conversation logs. Confi | 15m 23s |
| 4/18/26 | The Redesign Layoffs | Healthy-company layoffs are no longer just a lagging indicator of weakness. In this weekend edition, Stephen Forte argues they can be an early signal of organizational redesign — and explains what mid-market CEOs should do before the pressure shows up in their numbers. What this episode covers: why this wave of layoffs is different from 2009 and different from the 2023 over-hiring correction; why many strong companies are redesigning around new information economics, not just cutting costs; why most mid-market firms should not copy Block directly; the pattern Stephen sees across successful and failed AI adoption efforts; and a practical 90-day playbook for CEOs — pick two workflows, map them properly, run shadow mode, define decision rights, and learn from overrides. Key idea: the real shift is not AI as a tool. It is AI as a change to how context moves, how decisions get made, and what parts of management remain valuable. If your company is healthy, that is not a reason to delay this work. It may be the best reason to start it. | 13m 27s |
| 4/17/26 | Saboteurs Are Why Your AI Fails | Stephen Forte explores why AI investments are failing — and the answer is not what you think. Drawing on the CIA's 1944 Simple Sabotage Field Manual and a landmark 2026 survey showing that 29 percent of employees actively sabotage their company's AI strategy, he unpacks the invisible resistance destroying AI ROI. The CIA manual: how 80-year-old bureaucratic sabotage tactics are alive and well in your AI steering committee. The data: a 29 percent sabotage rate (44 percent among Gen Z), plus a 30-point perception gap between executives and employees. The failure landscape: 95 percent of AI pilots deliver zero ROI (MIT), with BCG attributing 70 percent of failures to people, not technology. The fear factor: 89 percent of workers worried about job security, and 55,000 AI-related layoffs in 2025. The spectrum of resistance: from overt refusal to invisible pretenders, plus the vicious cycle that makes sabotage look like technology failure. The solution: champion networks achieve 3x implementation success — find the domain experts already using AI on their own. Key insight: the programming language of this era is English. The real skill is domain expertise. Find your champions, reward them, and let your laggards self-select out. Sources: Writer/Workplace Intelligence 2026 Survey, MIT NANDA Initiative, BCG, RAND Corporation, ADP Research, Aalto University, CIA Simple Sabotage Field Manual (1944) | 9m 31s |
| 4/16/26 | CEO Silence Costs More Than AI | Today, one thread ties together a thousand layoffs at Snap, a survey showing the majority of C-suite leaders admitting AI is fracturing their organizations, and Molotov cocktails thrown at a tech CEO's home. That thread is the cost of what you, as a leader, have not yet said. Snap cuts 1,000 jobs (16% of its workforce) citing AI productivity — CEO Evan Spiegel was direct; most CEOs have not been. Writer's 2026 survey of 2,400 executives: 54% of the C-suite say AI is tearing their company apart; 97% have deployed agents, but only 29% see ROI; 35% cannot shut down a rogue agent. Physical attacks on AI leaders: Molotov cocktails at Sam Altman's home, and 13 bullets through an Indianapolis councilman's front door over a data center vote. The thesis: having no AI policy is a policy. You are just letting fear set it for you. Hosted by Stephen Forte. | 9m 37s |
| 4/15/26 | A Free AI Tool Just Breached 600 Firewalls | Every adoption metric just crossed the line — and the line turns out to be behind us. Three stories about AI adoption outrunning governance at a pace no one predicted. Stories covered: The 50% Line — Gallup's Q1 2026 workplace survey of 23,717 employed adults finds 50% now use AI at work, up from 46% last quarter; but only 41% of organizations have formally integrated AI, meaning roughly 14 million American workers are using AI tools their employer hasn't approved or secured. CyberStrikeAI: 600 Firewalls in 5 Weeks — a free, open-source AI tool autonomously compromised 600+ Fortinet FortiGate firewalls across 55 countries; no zero-day vulnerabilities were needed, just exposed management interfaces and weak authentication — the barrier to autonomous cyberattack just dropped to zero dollars and a laptop. 96% Agents, 12% Governed — OutSystems surveyed 1,900 IT leaders: 96% are already using AI agents in production, but only 12% have centralized governance; Gartner forecasts 40% of enterprise applications will include task-specific agents by end of 2026, up from 5% in 2025. Action items: ask your CISO about exposed management interfaces and single-factor authentication gaps — today, not next quarter; find out what percentage of your workforce is using AI tools IT hasn't provisioned; and count your agents — if nobody can give you a number, that is the number that matters most. Hosted by Stephen Forte. New episodes weekdays. | 9m 23s |
| 4/14/26 | Musk Made Banks Buy Grok. Here's Why You're Next. | Three stories about how AI companies stopped competing on capability and started competing on leverage — and what the squeeze means for every CEO writing checks right now. Stories covered: Musk's Grok Toll Booth — The New York Times confirmed Elon Musk is requiring every bank advising the SpaceX IPO to purchase Grok enterprise subscriptions; Goldman Sachs, JPMorgan, Morgan Stanley, and others have committed tens of millions — not because Grok won a bake-off, but because the alternative is losing access to $500M+ in advisory fees from a $50B+ raise. GPU Prices Surge 48% — the Ornn Compute Price Index shows Nvidia Blackwell GPU rentals now cost $4.08/hour, up from $2.75 eight weeks ago; half of planned 2026 data center builds are delayed — not by chips or capital, but by 5-year lead times on high-voltage electrical transformers. OpenAI Kills Sora — OpenAI is discontinuing its video generation tool with roughly six months' notice; a Futurum Group survey found 61% of enterprises cite OpenAI as their primary generative AI platform, raising hard questions about single-vendor dependency. Action items: lock in compute contracts before the next price jump; build optionality into your vendor stack before a deprecation notice forces your hand; and if 40%+ of your AI workloads run on a single vendor, draft a migration playbook now. Hosted by Stephen Forte. New episodes weekdays. | 8m 53s |
| 4/13/26 | Control Is the Illusion AI Sells Best | Three stories exploring the gap between what we believe and what the data shows in AI. Anthropic Mythos / Project Glasswing — an AI model too dangerous to release is now controlled by eleven handpicked organizations and the White House; that is not a safety framework, that is a guest list. OpenAI Acquires TBPN — OpenAI spent hundreds of millions to buy a podcast; it reports to their chief political operative, and the sole financial relationship is now OpenAI — when you cannot control the narrative through technology, you buy the megaphone. AI Coding Quality Collapse — six independent studies converge on the same finding: AI-generated code has more bugs, and developers using it believe they are faster when they are actually 19% slower; the 39-point perception gap is the largest ever documented. | 10m 49s |
| 4/11/26 | Managed Agents: The Infrastructure Barrier Just Dropped | Weekend Special Edition, Saturday, April 11, 2026. Anthropic launched Claude Managed Agents in public beta on April 9, 2026. The infrastructure problem that was killing enterprise agent projects between prototype and production is now a managed service. This episode goes deep on what changed and what to do about it. What we cover: Claude Managed Agents — four core capabilities: secure sandboxing, long-running autonomous sessions, multi-agent coordination (research preview), and a full governance layer; pricing is standard token rates plus $0.08/session-hour. The three-agent harness — Planner expands your 1-4 sentence prompt into a full product spec; Generator builds in sprint rounds; Evaluator interacts with the live application via Playwright (clicking through the UI, testing API endpoints, checking database states) and grades output against calibrated thresholds, running 5-15 iteration cycles until complete. The context problem, solved — externalized state via JSON specs, progress logs, and git commits rather than in-context memory; the Ralph Loop prevents premature completion claims. Early adopters — Notion, Asana, Rakuten (10x faster agent delivery, 22-point task success improvement), Vibecode. The five-point executive playbook — find your stalled agent project; scope by workflow, not AI capability; separate generators from evaluators in every AI process; design governance before scaling; get on the multi-agent coordination waitlist at claude.ai. Hosted by Stephen Forte, YPO Tahoe Integrated, YPO Miami Gold, YPO London Gold | 14m 42s | ||||||
| 4/10/26 | OpenAI's Pre-Apology for the AI Jobs Crisis | OpenAI published a 13-page policy paper on April 7, 2026 — the same morning The New Yorker published a 1.5-year investigation into Sam Altman's trustworthiness on AI safety. This episode reads OpenAI's proposals not as forward-looking policy, but as a pre-apology for disruption that is already underway and already documented. In this episode: what OpenAI is actually proposing — a four-day work week, a Public Wealth Fund, a robot tax, worker voice mechanisms, and mandatory AI safety auditing; how each proposal maps to a specific, documented harm, including 60,000 job cuts in March alone and $852 billion in AI-driven capital concentration; OpenAI's two-year lobbying record against the exact safety policies the paper now endorses; the timing collision — the policy paper and the New Yorker investigation dropped on the same day; who is funding the D.C. think tanks that will define responsible AI policy; and a closing question for every CEO: could your company write the equivalent internal document? Sources: OpenAI — Industrial Policy for the Intelligence Age; TechCrunch — OpenAI's vision for the AI economy; Fortune — Sam Altman says AI needs a New Deal. About the show: The YPO Technology Network AI Brief is a daily podcast for YPO members — CEOs and company presidents — covering AI developments with direct business impact. Hosted by Stephen Forte. | 8m 29s | ||||||
| 4/9/26 | One Employee Destroyed a Warehouse. Now Imagine Your Network. | April 9, 2026. A Kimberly-Clark warehouse in Ontario, California is gone — 1.2 million square feet, total loss — because one employee had access, motive, and fuel that was already in the building. This episode traces that pattern from the physical world into the digital: 500,000 tech layoffs coming this year, the SolarWinds supply chain attack explained, and last week's AI-era version of the same breach — 40 minutes, three major AI labs in the blast radius simultaneously. What we cover: the Ontario warehouse fire — Chamel Abdulkarim, 29, arrested on felony arson charges after destroying a 1.2M sq ft Kimberly-Clark distribution center serving 50 million people; the layoff fuse — 78,557 tech cuts in Q1, a 9x increase forecast this year, every departing employee walking out with system knowledge, credentials, and potentially still-active access; SolarWinds explained — Russian intelligence spent 14 months inside US government networks (Treasury, Homeland Security, State, DOE) through a trusted update that 18,000 organizations installed voluntarily; $90M+ recovery, and the first CISO ever charged by the SEC; AI's SolarWinds — LiteLLM poisoned on PyPI for 40 minutes, cascading to Mercor, a supplier to OpenAI, Anthropic, and Google simultaneously, with 4TB claimed stolen; three actions — offboarding access audit, AI supply chain dependency monitoring, AI-powered log monitoring. Key data: 1.2M sq ft warehouse, total loss — one person, no specialized skills; 78,557 Q1 tech layoffs, 47.9% attributed to AI, 9x increase forecast for 2026; SolarWinds — 18,000 orgs, 14 months undetected, $90M+ recovery, 11% avg revenue impact; LiteLLM attack — 40 minutes active, all 3 top US AI labs in the blast radius, 4TB claimed; IBM X-Force — 4x increase in supply chain attacks since SolarWinds. Sources: LA Times — Kimberly-Clark Warehouse Fire; Tom's Hardware — Q1 2026 Tech Layoffs; Breachsense — SolarWinds Case Study; Mercor/LiteLLM Breach; Mandiant — SolarWinds SUNBURST Analysis. Hosted by Stephen Forte, YPO Tahoe Integrated, YPO Miami Gold, YPO London Gold | 10m 11s | ||||||
| 4/8/26 | AI Just Made Your Disgruntled Employee Dangerous | The Citizen Hacker, April 8, 2026. Anthropic built an AI model so capable at finding security vulnerabilities that it cannot be released to the public. Claude Mythos Preview has already found thousands of high-severity flaws in every major operating system and browser, including a 27-year-old bug that survived decades of expert review. This episode unpacks what that signals about corporate security today, introduces the citizen hacker, and closes with five specific moves every company needs to make before this month is out. What we cover: the model Anthropic won't release — what Claude Mythos found, and what it means that it found these flaws entirely autonomously; the reality check — 94% of passwords reused, breaches taking 328 days to detect, hackers paying employees up to $15,000 for network access; the citizen hacker — how vibe coding's mirror image is already attacking companies at scale; the five moves — credential audit, AI log monitoring, agent governance, behavioral monitoring, continuous patching. Key data: 74-95% of breaches involve the human element (Verizon / SentinelOne 2025); average credential breach detection: 328 days; time-to-exploit: negative one day (Mandiant 2025); insider risk: $19.5M per organization annually (Ponemon 2026); attacker breakout time: 29 minutes, down 65% (CrowdStrike 2025); global ransomware damage: $74 billion in 2026 (Cybersecurity Ventures). Sources: Anthropic Project Glasswing; Secureframe 2026 Data Breach Statistics; Mandiant — Negative Time-to-Exploit; Ponemon/DTEX 2026 Cost of Insider Risks; Forrester — Vibe Hacking and No-Code Ransomware; Cybersecurity Ventures — Ransomware Damage 2026. Hosted by Stephen Forte, YPO Tahoe Integrated, YPO Miami Gold, YPO London Gold | 11m 24s | ||||||
| 4/7/26 | The Everywhere Bot: Every Enterprise Tool Is Spawning an Agent | This episode of the YPO Technology Network AI Brief, hosted by Stephen Forte, maps the agent explosion happening across every major enterprise platform — and explains why the right move is neither consolidation nor inaction. Key topics covered: why Salesforce, Notion (21,000+ custom agents), Jira, Zoom, monday.com, and Asana all shipped autonomous agents in the same quarter; the governance crisis — 3M+ corporate AI agents in deployment globally, with only 47% monitored; a scenario — Velocity Digital, a 400-person agency, discovers 31 unauthorized agents that had been running for six weeks; the experimentation thesis — why picking one agent now is the wrong move; a second scenario — Meridian Financial's 90-day, $180K experiment generates a projected $2.1M annual productivity gain; four structural differentiators — model flexibility, local access, data connectivity, and governance surface; and Arthur AI's Agent Discovery platform as an early governance response. Quotable close: "The window for informed experimentation is roughly 90 days before market consolidation starts making the decision for you." Hosted by Stephen Forte for the YPO Technology Network. | 9m 33s | ||||||
| 4/6/26 | Microsoft's Multi-Model Copilot: When AI Argues With Itself | In this episode of the YPO Technology Network AI Brief, Stephen Forte examines Microsoft's multi-model Copilot rollout — one of the most substantive architectural changes in enterprise AI this year. The episode covers what's deploying now, what goes generally available May 1, and why the gap between Microsoft's installed base and active usage is a change management problem, not a technology problem. Key topics covered: multi-model Copilot — Critique and Council modes, with GPT and Claude reviewing each other's work, producing a 13.8% improvement on the DRACO research benchmark; Council mode runs multiple models in parallel and synthesizes where they agree and diverge. Copilot Cowork and Agent 365 — long-running agentic work that continues after you close the browser, currently in the Frontier program with Capital Group; Agent 365 goes GA May 1 at $15/user/month. The adoption gap — Microsoft has 400 million installed users but only 15 million paid Copilot seats (3.3% penetration); of those, only 35.8% are actively using the product versus ChatGPT Enterprise's 83.1% activation rate. Copilot Studio model marketplace — April GA brings a platform where enterprise developers can orchestrate Claude, GPT, and Grok models against internal data via Fabric integration and the Agent-to-Agent protocol. Pricing referenced: Agent 365 at $15/user/month (GA May 1); Microsoft 365 E7 bundle (E5 + Copilot + Agent 365) at $99/user/month (GA May 1); Copilot enterprise at $30/user/month; SMB at $21/user/month. Hosted by Stephen Forte for the YPO Technology Network. | 11m 00s | ||||||
| 4/4/26 | The AI Hire Everyone Is Getting Wrong | This week's episode goes deep on one of the most consequential hiring decisions in your organization right now: who should be leading your AI transformation — and why the instinct to hire a senior technology executive is almost certainly wrong. Key topics covered: why 88% of companies using AI are seeing almost no return on the investment; the failure pattern — AI pilots that run for 18 months and never touch a real workflow; BCG's 10-20-70 rule — why 70% of AI value comes from process change, not the algorithm; IBM Watson Health — a $62 million cautionary tale about the wrong kind of leadership; the AI Operating Partner model emerging in private equity; the "anchor employee" hiding in your organization; the citizen developer revolution — Accenture's 50,000 internal builders; the constellation model vs. bloated enterprise platforms; and governance that keeps it from becoming shadow IT chaos. Host: Stephen Forte | 16m 32s | ||||||
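The 48% GPU price move cited in the April 14 episode ($2.75/hr to $4.08/hr for Blackwell rentals) compounds quickly at fleet scale, which is why "lock in compute contracts" leads the action items. A back-of-envelope sketch of that arithmetic — the two hourly rates come from the episode, but the fleet size and utilization are illustrative assumptions:

```python
# Monthly cost impact of the Blackwell rental price jump cited in the
# April 14 episode ($2.75/hr -> $4.08/hr), for a hypothetical fleet.
OLD_RATE = 2.75        # $/GPU-hour, eight weeks ago (from the episode)
NEW_RATE = 4.08        # $/GPU-hour, current (from the episode)
HOURS_PER_MONTH = 730  # average hours in a calendar month

def monthly_delta(gpus: int, utilization: float = 1.0) -> float:
    """Extra monthly spend at the new rate for a fleet of `gpus`."""
    hours = gpus * HOURS_PER_MONTH * utilization
    return hours * (NEW_RATE - OLD_RATE)

print(f"price increase: {NEW_RATE / OLD_RATE - 1:.0%}")        # 48%
print(f"extra spend, 256 GPUs: ${monthly_delta(256):,.0f}/mo")  # ~$248,550
```

At a hypothetical 256-GPU reservation, the same eight-week repricing adds roughly a quarter million dollars a month, which is the scale of exposure the episode's renegotiation advice is aimed at.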
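The planner/generator/evaluator harness described in the April 11 episode is an instance of a general control loop: a generator proposes work, a separate evaluator grades it against a calibrated threshold, and the cycle repeats up to an iteration cap. A minimal sketch of that loop — all names and thresholds here are hypothetical illustrations, not Anthropic's actual API:

```python
from typing import Callable, Optional, Tuple

def evaluator_loop(
    generate: Callable[[Optional[str]], str],      # proposes a candidate, given feedback
    evaluate: Callable[[str], Tuple[float, str]],  # grades it: (score, feedback)
    threshold: float = 0.9,
    max_iters: int = 15,                           # the episode cites 5-15 cycles
) -> str:
    """Run generate/evaluate rounds until the score clears the threshold.

    Keeping the evaluator separate from the generator is the point:
    the generator never grades its own work, which is how this pattern
    guards against premature completion claims.
    """
    feedback: Optional[str] = None
    candidate = ""
    for _ in range(max_iters):
        candidate = generate(feedback)
        score, feedback = evaluate(candidate)
        if score >= threshold:
            break
    return candidate

# Toy run: the "generator" extends its last attempt; the "evaluator"
# passes once the candidate reaches length 5.
result = evaluator_loop(
    generate=lambda fb: (fb or "") + "x",
    evaluate=lambda c: (len(c) / 5, c),
)
print(result)  # "xxxxx" after five cycles
```

In the harness the episode describes, the evaluator drives a live application via Playwright and state is externalized into JSON specs and git commits rather than in-process variables, but the control flow is the same shape.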
Chart Positions
20 placements across 20 markets.