
Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience: 1 - 1,000 (est. listeners per new episode within ~30 days)
- Monthly Reach: 1 - 5,000 (unique listeners across all episodes, 30 days)
- Active Followers: 1 - 500 (loyal subscribers who consistently listen)
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers: —
Total Plays: —
Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 5/4/26 | CapEx is just Memory Tax Now, Deepseek V4 NAND impact | The hyperscaler memory tax quarter. More CapEx? Pssh. We knew flops needed scaling. But $25B at Microsoft alone just to pay higher component prices? A memory tax. That's the news. NAND? Sold out. HBM? Sold out. What we cover: SanDisk revenue +97% sequential. 78% gross margin. Guidance above 80% next quarter. Samsung HBM4 first to ship. Demand outstripping supply. DeepSeek v4 goes SSD-centric. KV cache offloads to flash. Microsoft: $25B of 2026 CapEx is just memory pricing. Jassy: memory shor... | 45m 53s |
| 4/24/26 | Masterclass on Google's TPU v8 Networking | Google's Cloud Next 2026 keynote? Fire. 🔥 The TPU is now two chips instead of one — 8t for training, 8i for inference — but more interestingly, it's two scale-up networking topologies too. Austin Lyons (Chipstrat) and Vik Sekar (Vik's Newsletter) walk through what actually changed, one day after the announcement. OCS? Yes. AECs? Yep. Copper? Yep. Optics? Yep. We cover Virgo (Google's 47 petabit/second scale-out fabric, built entirely on OCS), Boardfly (the new scale-up topology for MoE infere... | 46m 59s |
| 4/20/26 | Meta VP Matt Steiner on Ads Infra, GPUs, MTIA, and LLM-Written Kernels | Matt Steiner, VP of Monetization Infrastructure, Ranking & AI Foundations at Meta, walks through how Meta's ad system actually works, and why the infrastructure behind it differs from what you'd build for LLMs. We cover Andromeda (retrieval on a custom NVIDIA Grace Hopper SKU Meta co-designed), Lattice (consolidating N ranking models into one), GEM (Meta's Generative Ads Recommendation foundation model), and the adaptive ranking model, a roughly one-trillion-parameter recommender served ... | 39m 56s |
| 4/17/26 | Credo + Dust Photonics, XPO, Nuvacore | Austin and Vik discuss Credo's acquisition of Dust Photonics, XPO as the new standard for scale-out (maybe instead of CPO?) and some thoughts about Nuvacore entering the CPU scene for agentic AI. Gavin Baker's tweet: https://x.com/GavinSBaker/status/2044410644301046031?s=20 Vik's Substack: https://www.viksnewsletter.com Austin's Substack: https://www.chipstrat.com Chapters 00:00 Introduction to the Semiconductor Landscape 02:49 The Rise of Nuvacore and CPU Innovations 05:27 The Demand for... | 38m 10s |
| 4/10/26 | Is Intel Finally Back with a $300B market cap? OpenClaw can Dream? | In this episode, Austin and Vik discuss whether Intel is finally back, with CPU partnerships with Google and heterogeneous inference with SambaNova, while market cap soars above $300B. Vik tries to get his OpenClaw instance to dream every night. Chapters 00:00 Anthropic's New Direction: Chip Development 02:30 Navigating Subscription Changes and Token Costs 05:25 Exploring Alternative AI Models 08:10 The Economics of AI: Rent vs. Buy 10:56 Intel's Resurgence and Market Dynamics 15:23 Intel's Stra... | 34m 23s |
| 4/9/26 | Reiner Pope (MatX): Designing AI Chips From First Principles for LLMs | Reiner Pope is the co-founder and CEO of MatX, the startup building chips designed from first principles for LLMs. Before MatX, Reiner was on the Google Brain team training LLMs, and his co-founder Mike Gunter was on the TPU team. They left Google one week before ChatGPT was released. A counterintuitive throughput insight from the conversation: “Low latency means small batch sizes. That is just Little’s law. Memory occupancy in HBM is proportional to batch size. So you can actually fit longer... | 38m 57s |
| 4/7/26 | $300M for 70K Viewers \| Intel x Elon, OpenAI x TBPN, Citrini's Strait of Hormuz Stunt | Intel Foundry just partnered with Elon Musk’s Terafab. What is Terafab anyway, why vertically integrated fabs make sense but the economics don’t (yet!), and what Intel is doing here (hint: no idea). Then: OpenAI acquires TBPN for an estimated $100-300M. Not sure why, but the more interesting thing is the value of niche audiences when five companies control a trillion dollars in AI capex. And finally, Citrini Research sent an analyst to the Strait of Hormuz with a Pelican case full of spy gear... | 36m 14s |
| 4/3/26 | NVIDIA's Marvell Strategy, Is Memory Different This Time?, Intel's Ireland Fab | In this episode, Austin and Vik analyze NVIDIA's $2 billion investment in Marvell NVLink Fusion, exploring its implications for AI infrastructure, interconnect protocols, and the broader chip ecosystem. They also discuss the current memory market surge, DRAM pricing, and Intel's strategic fab buyback, providing deep insights into industry trends and future directions. On Substack Vik: https://www.viksnewsletter.com/ Austin: https://www.chipstrat.com/ Chapters 00:00 NVIDIA's $2 Billion Inve... | 42m 01s |
| 3/27/26 | ARM AGI CPU has entered the chat, TurboQuant thrashes memory stocks | In this episode, Austin and Vik analyze recent developments in GloFo patent lawsuits, the impact of TurboQuant on AI inference, and ARM's strategic move into silicon for agentic AI workloads. Read Vik's substack: https://www.viksnewsletter.com Read Austin's substack: https://www.chipstrat.com Chapters 00:00 Patent Wars in Semiconductor Industry 07:14 Understanding TurboQuant and Its Implications 24:42 Innovations in Memory Management 28:00 The Rise of ARM AGI CPUs 32:56 Agentic AI and CPU... | 52m 15s |
| 3/20/26 | MicroLEDs Ain’t Dead, Micron Snags Vera Rubin | Austin and Vik break down a packed week in semiconductors, covering GTC, OFC, and Micron earnings. The conversation kicks off with Jensen Huang's bold claim that engineers should spend $250K/year on AI tokens, and whether companies will buy tokens or token generators (i.e., on-prem hardware like the Dell Pro Max with GB300). They dig into the CapEx vs OpEx tradeoffs, data security concerns, and how sharing GPU resources might end up looking a lot like the old EDA license model. Next up: Micr... | 43m 05s |
| 3/17/26 | Quick Takes: Nvidia Keynote at GTC | Vik and Austin unpack the Nvidia GTC keynote with fresh, top-of-mind takes, breaking down the key announcements, what matters and what doesn't. They discuss Groq's LPX, optics+copper for scale up, new CPU requirements, CPO for networking, what agents mean for software, and much, much more. Check out Austin's substack: https://www.chipstrat.com Check out Vik's substack: https://www.viksnewsletter.com Chapters 00:00 Introduction and Keynote Context 03:18 Keynote Highlights and G... | 58m 48s |
| 3/13/26 | Meta's Inference Accelerator & Applied Optoelectronics (AAOI) | Austin recaps moderating an agentic AI panel at Synopsys Converge, then gives an in-depth technical breakdown of Meta's MTIA custom silicon. Why they're building it, how chiplets let them ship a new chip every 6 months, and how the roadmap is shifting toward gen AI inference. Vik digs into Applied Optoelectronics (AAOI), the vertically integrated Texas laser shop whose stock went from $1.48 to $100+, and whether history is about to rhyme. ... | 1h 01m 56s |
| 3/7/26 | The Great Optics-Copper Crossroads | Austin and Vik break down the optics vs. copper debate that rocked semis this week. Nvidia dropped $4 billion on Lumentum and Coherent, Credo posted a blowout quarter betting on copper, and then Hock Tan shocked everyone claiming 400G per lane works over copper in Broadcom’s labs — potentially pushing CPO out to 2030+. Plus, Vik’s 4D chess conspiracy theory on why Hock Tan is talking up copper when Broadcom is a CPO company. Like, subscribe, and drop your thoughts on the copper vs... | 48m 16s |
| 2/27/26 | Optical Supply Chain: What would you buy? | This week, we move from optics technology to optics companies. We walk the AI optical supply chain from bottom to top. Main debate: Who has a moat? Who is already priced for perfection? *Not investment advice, do your own due diligence* AXTI - Indium phosphide substrate supplier. Critical bottleneck in the laser stack. Major China export-control risk. Massive stock run vs thin earnings. Tower Semiconductor - Leading silicon photonics foundry. 5x capacity expansion with customer prepayme... | 1h 01m 42s |
| 2/20/26 | Optical Networking Supercycle - ALL the Tech You NEED to know | Austin and Vik delve into the evolving landscape of optics and networking, particularly in relation to AI and data centers. The conversation covers various scales of networking, including scale across, scale out, and scale up, while also addressing the demand-supply dynamics in laser manufacturing and the future of optical circuit switches. The episode highlights the technological advancements and market opportunities in the optics sector, emphasizing the significance of these development... | 46m 08s |
| 2/13/26 | Memory Mayhem & AI Capex Madness | In this episode of the Semi Doped podcast, Austin and Vik delve into the current state of the semiconductor industry, focusing on the memory crisis driven by increasing demand from AI applications. They discuss the implications of rising memory prices, the impact of hyperscaler spending on the market, and the strategic moves of major players like Google, Microsoft, Meta, and Amazon in the AI landscape. Takeaways Memory prices are skyrocketing, impacting consumer electronics. The memory crisi... | 58m 53s |
| 2/10/26 | The future of financing AI infrastructure with Wayne Nelms, CTO of Ornn | In this episode, Vik and Wayne Nelms discuss the emerging financial exchange for GPU compute, exploring its implications for the AI infrastructure market. They discuss the value of compute, pricing dynamics, hedging strategies, and the future of GPU and memory trading. Wayne shares insights on partnerships, the depreciation of GPUs, and how inference demand may reshape hardware utilization. The conversation highlights the importance of financial products in facilitating data center deve... | 40m 38s |
| 2/6/26 | A New Era of Context Memory with Val Bercovici from WEKA | Vik and Val Bercovici discuss the evolution of storage solutions in the context of AI, focusing on Weka's innovative approaches to context memory, high bandwidth flash, and the importance of optimizing GPU usage. Val shares insights from his extensive experience in the storage industry, highlighting the challenges and advancements in memory requirements for AI models, the significance of latency, and the future of storage technologies. Takeaways Context memory is crucial for AI performance... | 54m 27s |
| 2/3/26 | OpenClaw Makes AI Agents and CPUs Get Real | Austin and Vik discuss the emerging trend of AI agents, particularly focusing on Claude Code and OpenClaw, and the resulting hardware implications. Key Takeaways: 2026 is expected to be a pivotal year for AI agents. The rise of agentic AI is moving beyond marketing to practical applications. Claude Code is being used for more than just coding; it aids in research and organization. Integrating AI with tools like Google Drive enhances productivity. Security concerns arise with giving AI agents acc... | 47m 34s |
| 1/28/26 | An Interview with Microsoft's Saurabh Dighe About Maia 200 | Maia 100 was a pre-GPT accelerator. Maia 200 is explicitly post-GPT for large multimodal inference. Saurabh Dighe says if Microsoft were chasing peak performance or trying to span training and inference, Maia would look very different. Higher TDPs. Different tradeoffs. Those paths were pruned early to optimize for one thing: inference price-performance. That focus drives the claim of ~30% better performance per dollar versus the latest hardware in Microsoft’s fleet. Interesting topics includ... | 52m 41s |
| 1/26/26 | Can Pre-GPT AI Accelerators Handle Long Context Workloads? | OpenAI's partnership with Cerebras and Nvidia's announcement of context memory storage raise a fundamental question: as agentic AI demands long sessions with massive context windows, can SRAM-based accelerators designed before the LLM era keep up—or will they converge with GPUs? Key Takeaways 1. Context is the new bottleneck. As agentic workloads demand long sessions with massive codebases, storing and retrieving KV cache efficiently becomes critical. 2. There's no one-size-fits-all. Sachin ... | 38m 02s |
| 1/22/26 | An Interview with Innoviz CEO Omer Keilaf about current LiDAR market dynamics | Innoviz CEO Omer Keilaf believes the LiDAR market is down to its final players—and that Innoviz has already won its seat. In this conversation, we cover the Level 4 gold rush sparked by Waymo, why stalled Level 3 programs are suddenly accelerating, the technical moat that separates L4-grade LiDAR from everything else, how a one-year-old startup won BMW, and why Keilaf thinks his competitors are already out of the race. Omer Keilaf founded Innoviz in 2016. Today it's a publicly traded Tier 1... | 46m 41s |
| 1/19/26 | LiDAR, Explained: How It Works and Why It Matters | Austin and Vik discuss why LiDAR is important for autonomy, how modern systems work, and how the technology has evolved. They compare Time of Flight and FMCW architectures, explain why wavelength choice matters, and walk through the tradeoffs between 905 nm and 1550 nm across eye safety, cost, and performance. The discussion closes with a clear-eyed look at competition, Chinese suppliers, and supply chain risk. Chapters (00:00) Introduction to LiDAR and why it matters (05:40) The case for LiD... | 35m 40s |
| 1/12/26 | Nvidia CES 2026 | Episode Summary Austin and Vik break down NVIDIA’s CES 2026 keynote, focusing on Vera Rubin, DGX Spark and DGX Station, uneducated investor panic, and physical AI. Key Takeaways DGX Spark brings server-class NVIDIA architecture to the desktop at low power, aimed at developers, enthusiasts, and enterprises experimenting locally. DGX Station functions more like a mini-AI rack on-prem: Grace Blackwell for inference and development without full racks. The historical parallel is mainfram... | 47m 16s |
| 1/8/26 | Insights from IEDM 2025 | Austin and Vik discuss key insights from the IEDM conference. They explore the significance of IEDM for engineers and investors, the networking opportunities it offers, and the latest innovations in silicon photonics, complementary FETs, NAND flash memory, and GaN-on-silicon chiplets. Takeaways Penta-level NAND flash memory could disrupt the SSD market. GaN-on-Silicon chiplets enhance power efficiency. Complementary FETs. Optical scale-up has a power problem. The future of transistors is ... | 42m 17s |
Chart Positions
7 placements across 7 markets.