
Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience: 1–1,000 (est. listeners per new episode within ~30 days)
- Monthly Reach: 1–5,000 (unique listeners across all episodes, 30 days)
- Active Followers: 1–500 (loyal subscribers who consistently listen)
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers
—
Total Plays
—
Total Reviews
—
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
Beyond the CPU vs GPU War: Rethinking AI Compute at the System Level
Apr 28, 2026
49m 11s
Inside the AI Bottleneck: Data Movement, Chiplets, and System Scaling
Mar 27, 2026
54m 15s
From Arduino to AI Infrastructure: Scaling the Next Wave of Computing
Jan 21, 2026
43m 20s
The Architecture of "Open" Intelligence
Oct 14, 2025
44m 26s
AI from Edge to Cloud: Hype vs Reality
Aug 14, 2025
47m 53s
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 4/28/26 | Beyond the CPU vs GPU War: Rethinking AI Compute at the System Level | In this episode of Tech Threads, Nandan Nayampally, Baya Systems CCO, sits down with Ian Ferguson, Vice President of Vertical Markets and Business Development at SiFive, to unpack one of the most important shifts happening in modern computing: AI is no longer just about scaling compute, it's about orchestrating complexity. As architectures fragment across accelerators, chiplets, and custom silicon, the real challenge is no longer building faster chips; it's turning all of these elements into a cohesive, high-performance system. This conversation explores why the industry is moving beyond the traditional "CPU vs GPU" narrative and toward a system-level approach where performance is defined by how effectively compute, memory, interconnect, and software work together. From the growing momentum behind RISC-V to the rise of heterogeneous compute environments, the discussion highlights a clear trend: the future won't be defined by a single dominant architecture, but by optimized combinations of technologies tailored to specific workloads. That shift introduces a new layer of complexity. Key themes explored in this episode include: why data movement is emerging as the primary constraint in AI systems; how efficiency metrics like "tokens per dollar" are reshaping design priorities; the shift toward purpose-built architectures across data center, automotive, and edge applications; the role of open ecosystems and interoperability in accelerating innovation; and why competitive advantage is shifting from individual components to full system design. If you're interested in where AI is headed, this is a must-watch conversation on the forces shaping the future of compute and what it takes to stay ahead. | 49m 11s |
| 3/27/26 | Inside the AI Bottleneck: Data Movement, Chiplets, and System Scaling | For the last decade, AI has been driven by one thing, more compute: bigger models, more accelerators, higher throughput. But as NVIDIA's Jensen Huang recently highlighted at GTC, the industry is hitting a different kind of wall, one that hasn't received nearly as much attention. The real constraint is no longer just compute. It's data movement. To its credit, NVIDIA has pushed this frontier with innovations like NVLink Fusion and continued investment in connectivity and AI dataflow architectures. But the challenge is bigger than any one company. As AI systems scale to hundreds, and even thousands, of processors, performance is increasingly defined by the ability to efficiently move, synchronize, and manage data across distributed architectures spanning chiplets, nodes, and entire racks. In this episode of Tech Threads, we bring together a panel of deeply experienced technologists, architects, and leaders from companies like Intel, Arm, Altera, Texas Instruments, and Arteris, individuals who have helped shape modern compute, interconnect standards, and system architecture. Together they explore what is really changing beneath the surface: why traditional scaling approaches are breaking down, how coherent interconnects and network-on-chip architectures are evolving, and why system-level thinking is becoming essential. They also dive into the growing complexity introduced by chiplet-based designs, heterogeneous compute, and distributed memory systems, and what it takes to maintain performance, efficiency, and programmability at scale. This is not just a technology shift, it's an architectural reset. If you're building or thinking about next-generation AI systems, this conversation gets to the heart of what matters next. | 54m 15s |
| 1/21/26 | From Arduino to AI Infrastructure: Scaling the Next Wave of Computing | What do Arduino, IoT, edge AI, and Nvidia-era data centers have in common? They all depend on ecosystems: people, platforms, and momentum. In this episode, Sander Arts joins Baya's Chief Commercial Officer and Tech Threads host Nandan Nayampally for a wide-ranging, candid conversation on how breakthrough technology actually scales. Sander brings a rare operator's perspective shaped by 25+ years scaling global technology companies across semiconductors, enterprise software, and AI. As the founder of Orange Tulip Consultancy, he serves as a Fractional CMO and growth advisor, helping leadership teams turn deep technology into real-world adoption. Together, Nandan and Sander explore how communities, developer access, and platform ecosystems drive that adoption, and why timing and openness can be just as critical as technical performance. The discussion moves from the maker-era lessons of Arduino and IoT to today's AI infrastructure boom, unpacking why scaling "long-tail" customers is both an opportunity and an operational challenge, and how edge AI and data center markets are evolving in parallel. They also debate the art of "opening the kimono," how standardization and middleware shape adoption, and why capital intensity and speed often determine whether innovation stays local or becomes global. They close by looking ahead at emerging trends like robotics, neo-cloud architectures, quantum with real customers, and the networking backbone powering AI's future, and how these shifts intersect with Baya's view of increasingly complex, software-driven systems. | 43m 20s |
| 10/14/25 | The Architecture of "Open" Intelligence | In this episode of Tech Threads: Weaving the Intelligent Future, legendary chip architect Jim Keller joins Nandan Nayampally, Baya Systems' Chief Commercial Officer, to explore how openness, modularity, and simplicity are redefining the architecture of intelligence. From his early work on Apple's A4 through A7 processors to today's AI-driven computing revolution, Jim shares how every leap in performance has come from breaking complexity down into composable, modular layers. Referencing The Systems Bible, he explains why "you can't fix broken complicated systems," and why the only path forward is to design simpler components that can scale and evolve together. The conversation spans: the AI paradigm shift and why traditional compute models no longer scale; how data movement, not just compute, has become the new frontier; the rise of chiplets and software-driven fabrics for scalable design; the power of open ecosystems like RISC-V and OCA to democratize AI innovation; and building a path toward sovereign and collaborative compute platforms worldwide. Listen as Jim Keller unpacks the engineering philosophy behind building open, intelligent systems and what it means for the future of AI and computing at scale. | 44m 26s |
| 8/14/25 | AI from Edge to Cloud: Hype vs Reality | In this episode of Tech Threads, Nandan Nayampally sits down with Sally Ward-Foxton (EE Times) and Dr. Ian Cutress (More Than Moore) for an unfiltered look at the state of AI, from the far edge to hyperscale data centers. Ahead of the recording, we asked our LinkedIn followers to weigh in on some of the biggest questions in AI today, from bottlenecks in system design to the future of GPUs. Those poll results are revealed and discussed in the episode, bringing your insights directly into the conversation. The discussion covers where the real bottlenecks lie in AI system design, whether "AI at the edge" is living up to the hype, and whether GPUs will continue to dominate or give way to new architectures. With insights on hardware-software co-design, open vs proprietary ecosystems, and the realities of scaling AI infrastructure, this episode blends deep technical perspective with candid industry observations. If you care about AI performance, power efficiency, and what's next in compute architecture, this is a discussion you won't want to miss. | 47m 53s |
| 7/15/25 | Edge AI Revolution: Scaling Intelligence from the Network Edge to the Data Center | In this episode of Tech Threads: Weaving the Intelligent Future, Baya Systems' CCO Nandan Nayampally welcomes Fabrizio Del Maffeo, founder and CEO of Axelera AI, one of Europe's most promising AI semiconductor startups. The conversation opens with a sharp look at the growing shift from cloud to edge AI, exploring the power, cost, and latency constraints, and, more importantly, the regional and use-case considerations that are reshaping how and where intelligence is deployed. The discussion covers strategies for deploying AI at the network edge, adapting to rapidly evolving workloads, and leveraging digital in-memory computing to enable low-power, high-throughput inference acceleration. It also delves into the future of chiplet-based design, the role of open and programmable hardware, and broader efforts to democratize compute. With shared perspectives on "scale within" and scalable system architectures, this episode offers a compelling view into the future of distributed AI. | 39m 56s |
| 6/13/25 | Beyond the Bottlenecks: A Vision for Intelligent Systems | In this episode of Tech Threads: Weaving the Intelligent Future, host Nandan Nayampally welcomes Rochan Sankar, AI infrastructure pioneer and founder of Enfabrica, for a deep dive into the next frontier of intelligent computing. Together, they explore one of the most critical and often overlooked challenges in AI: data movement, and its impact at every level from cloud to end device. The discussion explores new system architectures, keys to scalability, optical interconnects, chiplet innovation and its impacts, and a whole lot more. From startup lessons to bold predictions, this conversation delivers candid insights and forward-looking perspectives on what it will take to build truly scalable AI systems. Whether you're an engineer, architect, or simply curious about the technologies shaping tomorrow's computing landscape, this episode delivers both substance and inspiration. Listen in to discover what's redefining performance at the infrastructure layer, and what's coming next. | 44m 12s |
| 5/14/25 | Scaling AI: Simplicity Meets Compute | In this riveting episode of Baya Systems' Tech Threads podcast, tech luminaries Raja Koduri, founder and CEO of Mihiri AI, and Dr. Sailesh Kumar, founder and CEO of Baya Systems, unpack the explosive growth of the intelligent compute era, where AI demands unprecedented scalability. From Koduri's trailblazing accelerated computing work at Apple, Intel, and AMD to Kumar's innovations in software-defined networking, they reveal how today's supply-constrained systems are evolving through chiplet technology and simplified architectures to meet the tripling annual needs of AI models. Koduri stresses simplicity in design, advocating for software abstractions and hardware that hide complexity to enable seamless scaling. Kumar details the need for configurable, software-defined fabrics to support heterogeneous compute workloads. From the elegance of "simple" scaling solutions to the critical role of software-hardware co-design, this episode is a masterclass in understanding the tech that powers our world, and what's coming next: a must-listen for tech enthusiasts and professionals alike, offering a thrilling glimpse into the trillion-agent AI future and the scalable, boundary-pushing innovations shaping tomorrow's world. Learn more about Baya Systems' software-defined fabrics solutions for scalable AI at bayasystems.com. | 47m 29s |
| 4/16/25 | AI, Open Platforms, and the Next Frontier of Innovation | In the second episode of the Tech Threads podcast, Nandan Nayampally, Chief Commercial Officer at Baya Systems, interviews Keith Witek, Chief Operating Officer at Tenstorrent. The discussion focuses on the new frontiers of AI and the technological challenges they present. Witek, a seasoned technology executive with a rich background at companies like Google, SiFive, Tesla, and AMD, shares insights into Tenstorrent's recent $693 million fundraising and its mission to bring high-performance RISC-V cores to data centers and mobile phones. He emphasizes the importance of open platforms, drawing parallels to how Linux competes with Windows, and highlights Tenstorrent's partnerships, such as with Baya Systems, to enhance system design efficiency and speed up market delivery. The conversation also delves into broader industry trends, including the consolidation of control points in the supply chain, the impact of trade wars, and the need for sovereign technology development to reduce dependency on monopolies. Witek and Nayampally discuss the transformative potential of AI, predicting disruptions in sectors like autonomous driving, healthcare, and entertainment, where AI could redefine business models and societal structures. They explore the role of chiplet technology in making hardware more agile and cost-effective, advocating for standardization to foster innovation. The episode concludes with a call to join the open platform movement to accelerate progress, overcome barriers like slowing Moore's Law, and create a larger market pie for all stakeholders. | 40m 52s |
| 2/19/25 | AI, Chiplets, and the Future of Semiconductors | In this inaugural episode of Tech Threads, hosted by Stan Reiss, we welcome Baya Systems’ new board members, Siva Yerramilli (Synopsys) and Manish Muthal (Maverick Silicon), for a deep dive into the rapid evolution of the semiconductor industry. As AI accelerates innovation, chiplet-based architectures are redefining how chips are designed and built, all while Moore’s Law slows down.Join us as we explore what’s driving this shift, the challenges ahead, and how the industry is adapting to meet the demands of next-generation computing. | 32m 37s |
Chart Positions
3 placements across 3 markets.