
Insights from recent episode analysis:
- Audience Interest
- Podcast Focus
- Publishing Consistency
- Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Total monthly reach
Estimated from 1 chart position in 1 market.
By chart position:
- 🇳🇴 NO · Technology #74 · 3K to 10K

Per-Episode Audience
Est. listeners per new episode within ~30 days: 900 to 3K
🎙 Daily cadence · 204 episodes · Last published 1w ago

Monthly Reach
Unique listeners across all episodes (30 days): 3K to 10K (🇳🇴 100%)

Active Followers
Loyal subscribers who consistently listen: 1.6K to 5.5K
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly.
- Total Followers: —
- Total Plays: —
- Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
- AI Agents, Friction, and the Future of Developer Experience · May 5, 2026
- The Evolution of Microservices: Agents, Monoliths, and the Patterns That Never Die · Apr 29, 2026
- How Can AI Agents Cut Support Resolution Time by 95%? · Apr 22, 2026
- Spec-Driven Development and the AI Unified Process — with Simon Martinelli · Apr 14, 2026
- Neurosymbolic AI: Combining GenAI with Mathematical Proof — with Danilo Poccia · Apr 8, 2026

(Episode durations unavailable.)
Social Links & Contact
Official channels & resources
- Official Website (sign in to view)
- RSS Feed (sign in to view)
| Date | Episode | Description | Length |
|---|---|---|---|
| 5/5/26 | AI Agents, Friction, and the Future of Developer Experience | AI agents are transforming how we write, test, and ship software — but are they actually improving the developer experience? Recorded live at AWS Summit London, Romain is joined by Tomasz Ptak — AWS AI Hero and Senior Engineer at Duco — for a candid conversation about developer experience friction in the age of AI agents. We explore what happens when teams adopt AI coding assistants without thinking about the developer workflow holistically — from context overload and broken feedback loops to the hidden costs of AI-generated code that nobody reviewed. The conversation draws on Werner Vogels' 'Renaissance Developer' keynote from re:Invent 2025, where he argued that developers need to be broader thinkers, not just faster coders. Tomasz shares his perspective on what great developer experience looks like when AI agents are part of the picture, how the AWS AI League is helping developers build real agent skills through gamified competition, and why critical thinking about AI adoption matters more than blind acceleration. We also discuss psychological safety in engineering teams — drawing on Brené Brown's work on vulnerability — and why the best developer tools are the ones you barely notice, as Don Norman taught us decades ago. Whether you are building AI agents, designing internal developer platforms, or evaluating how AI tools fit into your team's workflow, this conversation offers a grounded, human-centered perspective on reducing friction and improving developer experience in 2026 and beyond. | — |
| 4/29/26 | The Evolution of Microservices: Agents, Monoliths, and the Patterns That Never Die | Recorded live at AWS Summit London, Matheus Guimaraes — Senior Developer Advocate at AWS and microservices specialist with over 25 years in tech — joins Romain to explore how agentic AI is reshaping the way we think about distributed systems architecture. From Martin Fowler's 2014 definition to agentic microservices in 2026, Matheus unpacks why the same distributed systems patterns — single responsibility, context dilution, failure modes — keep resurfacing in every new wave of architecture. The conversation covers the monolith vs. microservices debate as a deliberate architectural choice rather than accidental spaghetti, modular monoliths with Spring Modulith, and how AI coding assistants like Kiro are changing the architect's role from writing boilerplate to making higher-order design decisions. Matheus introduces his concepts of 'smart APIs,' 'monolithic agentic microservices,' and 'specialized agentic microservices' — and explains his talk 'Is It Agent?' on when to reach for agents vs. traditional applications. We dig into the serverless primitives purpose-built for agentic workloads: Amazon Bedrock AgentCore Runtime for long-running agent processes, AWS Lambda Durable Functions for multi-step workflows, and the AWS DevOps Agent for autonomous incident response. We also explore integration patterns with MCP and Google's A2A protocol, the 'lost in the middle' problem with context dilution, and why critical thinking about AI adoption matters more than ever. Whether you are decomposing a monolith or designing your first agentic system, this conversation connects the dots between a decade of microservices wisdom and the agentic future. | — |
| 4/22/26 | How Can AI Agents Cut Support Resolution Time by 95%? | CyberArk's support team was drowning in logs. With 40+ products across SaaS and self-hosted environments, each generating logs in different formats, support engineers were spending days just preparing data before they could even start investigating a customer issue. Complex cases took up to 15 days to resolve. Moshiko Ben Abu, a Software Engineer at CyberArk — now part of Palo Alto Networks — built an AI-powered system that changed all of that. In this episode, he walks us through the full architecture: replacing manual regex parsers with AI-generated grok patterns using Amazon Bedrock and Claude, storing structured data in Apache Iceberg tables via PyIceberg with automatic schema evolution, and querying everything through Athena — all while keeping PII masked and data encrypted in S3. But the real breakthrough came with agents. Moshiko describes how he moved from single-product Bedrock agents to a swarm of specialized AI agents built with the Strands framework, where agents investigating product A can autonomously call agents for product B and C to trace root causes across the entire stack. Cases that took 15 days now resolve in hours. Simple cases drop from 4-6 hours to 15-30 minutes. Engineers handle 4x more cases per day. We also dig into the security layer — Cedar policies and Amazon Verified Permissions for agent authorization, the identity integration with AgentCore, and what's coming next: S3 Tables, AgentCore in production, and cross-platform agent collaboration with Palo Alto. Moshiko's advice for developers getting started? Learn IAM first, then compute, then databases — and write everything in CDK. | — |
| 4/14/26 | Spec-Driven Development and the AI Unified Process — with Simon Martinelli | Simon Martinelli is a Java Champion, Vaadin Champion, and Oracle ACE Pro with over three decades of experience building enterprise software. In this episode, he introduces the AI Unified Process (AIUP) — a methodology he created that combines the rigor of the Rational Unified Process with modern AI-assisted development, and makes a compelling case for why specifications, not code, should be the source of truth. We explore the difference between system use cases and user stories, and why use cases — with their actors, preconditions, main flows, alternative flows, and business rules — give AI agents far better structure to generate working code. Simon walks through the four phases of AIUP: Inception, Elaboration, Construction, and Transition, showing how specs, code, and tests evolve together iteratively while staying in sync. On the architecture side, Simon advocates for Self-Contained Systems over microservices — vertical slices that include UI, backend, and database together, reducing cognitive load for both developers and AI agents. His tech stack of choice is Vaadin for full-stack Java UI, jOOQ for type-safe explicit SQL, and Spring Boot as the application framework — a combination he argues is uniquely well-suited for AI-driven development because it keeps everything in one language with no hidden behavior. We also dig into testing strategies with Karibu Testing for browserless Vaadin tests and Playwright for end-to-end coverage, how teams of two working on bounded contexts with trunk-based development are shipping faster than ever, and why the era of AI is bringing back the Renaissance developer — the generalist who understands the full stack from business requirements to production deployment. | — |
| 4/8/26 | Neurosymbolic AI: Combining GenAI with Mathematical Proof — with Danilo Poccia | What if you could combine the creative power of generative AI with the mathematical certainty of formal verification? In this episode, Danilo Poccia — Principal Developer Advocate at AWS — breaks down automated reasoning, a field of AI that has been quietly powering critical AWS services for years and is now becoming essential for production AI systems. We explore why generative AI alone is not enough for high-stakes applications, and how automated reasoning provides mathematical proof — not probabilistic guesses — that your AI agents are following the rules. Danilo traces the roots of automated reasoning back to the 'symbolist' branch of AI, explains how AWS has used it internally for years to verify S3 bucket policies, encryption algorithms, and network configurations, and shows how it now converges with neural networks in what researchers call neurosymbolic AI. On the practical side, we dig into Amazon Bedrock Guardrails with Automated Reasoning checks — the first and only generative AI safeguard that uses formal logic to verify response accuracy. Danilo walks through how developers can use policy verification for agentic systems and tool access control with Cedar, and how AgentCore Gateway fits into the picture for managing MCP-based tool interactions at scale. We also cover the open source landscape: Dafny for verification-aware programming, Lean as a theorem prover, Prolog for logic programming, and the growing ecosystem of MCP servers that bring these capabilities into everyday development workflows. Whether you are building AI agents for production or just curious about what comes after prompt engineering, this conversation will change how you think about AI reliability. | — |
| 4/1/26 | Agent-Native Serverless Development with Shridhar Pandey | In this episode, we sit down with Shridhar Pandey, Principal Product Manager on AWS Serverless Compute, to explore how the serverless team is pioneering agent-native development. Shridhar walks us through a remarkable March 2026 where the team shipped three major capabilities in just three weeks — a Kiro Power for Durable Functions, a Kiro Power for SAM, and a serverless agent plugin now available in Claude Code and Cursor. We trace the journey from 18 months of traditional developer experience improvements — local testing, remote debugging, LocalStack integration — to the realization that AI agents are fundamentally changing how developers build, deploy, and operate serverless applications. The serverless MCP server, now approaching half a million downloads, laid the foundation, and the new agent plugin builds on it with four specialized skills covering Lambda functions, operational best practices, infrastructure as code with SAM and CDK, and durable functions. Shridhar shares his thinking on agent personas — developer agents, operator agents, and platform owner agents — and how the team is applying an 'AX' (agent experience) lens to every feature they ship. We also take a candid detour into how AI has transformed his own work as a product leader: research that took weeks now takes hours, document cycles that spanned days now wrap up in a single sitting, and a fleet of agents handles daily digests and data analysis for the team. Open source runs through everything — the MCP server, the plugin, the public Lambda roadmap on GitHub — and Shridhar invites the community to shape what comes next. | — |
| 3/25/26 | The Hard Lessons of Cloud Migration: inDrive's Path from Monolith to Microservices | Join us for a fascinating conversation with Alexander 'Sasha' Lisachenko (Software Architect) and Artem Gab (Senior Engineering Manager) from inDrive, one of the global leaders in mobility operating in 48 countries and processing over 8 million rides per day. Sasha and Artem take us through their four-year transformation journey from a monolithic bare-metal setup in a single data center to a fully cloud-native microservices architecture on AWS. They share the hard-earned lessons from their migration, including critical challenges with Redis cluster architecture, the discovery of single-threaded CPU bottlenecks, and how they solved hot key problems using Uber's H3 hexagon-based geospatial indexing. We dive deep into their migration from Redis to Valkey on ElastiCache, achieving 15-20% cost optimization and improved memory efficiency, and their innovative approach to auto-scaling ElastiCache clusters across multiple dimensions. Along the way, they reveal how TLS termination on master nodes created unexpected bottlenecks, how connection storms can cascade when Redis slows down, and why engine CPU utilization is the one metric you should never ignore. This is a story of resilience, technical problem-solving, and the reality of large-scale cloud transformations — complete with rollbacks, late-night incidents, and the eventual triumph of a fully elastic, geo-distributed platform serving riders and drivers across the globe. | — |
| 3/18/26 | Spring AI and AgentCore: Building Enterprise AI Agents in Java | It's a milestone — episode 200! And to mark the occasion, we're doing something we've never done before: hosting two guests at the same time. James Ward (Principal Developer Advocate at AWS) and Josh Long (Spring Developer Advocate at Broadcom, Java Champion, and host of 'A Bootiful Podcast') join Romain for a wide-ranging conversation about why Java and Spring AI are becoming the go-to stack for enterprise AI development. We kick off with Spring AI's rapid evolution — from its 1.0 GA release to the just-released 2.0.0-M3 milestone — and why it's far more than an LLM wrapper. James and Josh break down how Spring AI provides clean abstractions across 20+ models and vector stores, with type-safe, compile-time validation that prevents the kind of string-typo failures that plague dynamically typed AI code in production. The numbers back it up: an Azul study found that 62% of surveyed companies are building AI solutions on Java and the JVM. James and Josh explain why — enterprise teams need security, observability, and scalability baked in, not bolted on. We dive into the Agent Skills open standard from Anthropic and James's SkillsJars project for packaging and distributing agent skills via Maven Central. We also cover Spring AI's official Java MCP SDK (now at 1.0) and how MCP and Agent Skills complement each other for building capable, composable agents. The performance story is striking: Java MCP SDK benchmarks show 0.835ms latency versus Python's 26.45ms, 1.5M+ requests per second versus 280K, and 28% CPU utilization versus 94% — with even better numbers using GraalVM native images. Josh and James also walk us through Embabel, the new JVM-based agentic framework from Spring creator Rod Johnson, featuring goal-oriented and utility-based planners with type-safe workflow definitions built on Spring AI foundations. We close with a look at running Spring AI agents on Amazon Bedrock AgentCore — memory, browser support, code interpreter, and serverless containers for agentic workloads. | — |
| 3/11/26 | AWS Hero Linda Mohamed: Juggling Cloud, Community & Agentic AI | Some guests make you want to close your laptop and go build something. Linda Mohamed is one of them. In this episode, Romain sits down with Linda — AWS Community Hero, User Group Leader, Chairwoman of the AWS Community DACH Association, and independent cloud consultant based in Vienna. Linda started as a Java developer in on-premises enterprise environments. Her first AWS touch point? Building an Alexa skill for a smart home product — discovering Lambda almost by accident, and never looking back. Today she's building multi-agent AI systems, running an AI-powered video pipeline with five media customers, and doing it all while being one of the most energetic and generous contributors in the AWS community. Discover Linda's journey from Java developer in telecom to cloud and AI consultant, conference-driven development as a forcing function to ship, and building Otto — a multi-agent Slack bot using Crew AI, LoRA fine-tuning, and Amazon Bedrock AgentCore Runtime. Learn about the AI-powered video analysis pipeline she built to solve her own problem and ended up selling to five media customers, vibe coding vs spec-driven development and when each makes sense, and why Clean Code principles still apply when designing agent architectures. | — |
| 3/4/26 | Evolving Lambda: from ephemeral compute to durable execution | In this episode, Romain sits down with Michael Gasch, Product Manager at AWS for Lambda Durable Functions, to explore one of the most exciting launches in the Serverless space in recent years. Michael shares the full story: from the early days of Lambda and the evolution of the serverless developer experience, to the challenges developers face when building multi-step, stateful workflows — and how Durable Functions addresses them natively within Lambda. Discover the evolution of AWS Serverless and why last year was 'the year of Lambda', key launches including IDE integrations, Lambda Managed Instances, and Lambda Tenant Isolation. Learn what Lambda Durable Functions are and what they are not, the checkpoint-replay model and how it enables resilient, long-running executions, and wait patterns including simple wait, wait for callback, and wait for condition. Explore real-world use cases: distributed transactions, LLM inference orchestration, ECS task coordination, and human-in-the-loop workflows. Michael shares unexpected feedback from customers about architectural simplification, how coding agents like Kiro dramatically accelerate writing Durable Functions, and when to choose Durable Functions vs. Step Functions vs. SQS/SNS. Plus, what's coming next: more regions, and the Java SDK (now available!). | — |
| 2/25/26 | Your AI Agent Can't Multitask — Here's How to Fix It | Mike Chambers is back — calling in from the other side of the globe — and he brought a lot to unpack. We pick up threads from our first conversation and follow them into genuinely exciting (and occasionally mind-bending) territory. We start with OpenClaw, the open-source agentic framework that took the developer world by storm. Mike shares his take on why it happened now — not just what it is — and why the timing was almost inevitable given how developers had been quietly experimenting with local agents for the past year. Then we go deep on asynchronous tool calling — a project Mike has been working on since mid-2024 that finally works reliably, thanks to more capable models. The idea: let your agent kick off a long-running task, keep the conversation going naturally, and have the result arrive without interrupting the flow. Mike walks through how he built this on top of Strands Agents SDK and why he's planning to propose it as a contribution to the open-source project. We also explore Strands Labs and its freshly released AI Functions — a genuinely new way to think about embedding generative capability directly into application code. Is this Software 3.1? Mike makes the case, and Romain pushes back in the best way. The episode closes with a look ahead: agent trust, observability with OpenTelemetry, and a thought experiment about what software might look like in five years if the execution environment itself becomes a model. | — |
| 2/18/26 | Chris Miller on AI Coding, Multi-Agent Systems, and the Silicon Valley Vibe | Join us for an engaging conversation with Chris Miller, an AWS Hero since 2021 and AI Software Engineer at Workato. Chris shares his journey from accidentally winning a DeepRacer competition to becoming a community leader in the San Francisco Bay Area. We dive deep into the realities of AI-assisted development, exploring multi-agent architectures, the Road to re:Invent hackathon experience, and what it's really like to be building in Silicon Valley's AI boom. Discover how Chris moved from DeepRacer champion to AWS Hero and community leader, his experience building a multi-agent imposter architecture featuring Jeff Barr, Swami, and Werner Vogels for the Road to re:Invent Hackathon, and the reality of moving beyond 'vibe coding' to responsible AI development. Learn about multi-agent orchestration patterns, token management, recursion limits, and the current state of AI development in San Francisco. Chris shares insights on developer tools like Kiro, the Strands framework, autonomous agents, and best practices for code review, testing, and transparency in AI-generated code. Whether you're exploring AI-assisted development, building multi-agent systems, or curious about the Silicon Valley AI scene, this conversation offers practical insights from the trenches. | — |
| 2/11/26 | From MCP to Multi-Agents: The Evolution of Agentic AI (and What's Next) | Mike Chambers reflects on 2025 as 'the year of agents,' though not quite in the way he predicted. From MCP's rocky launch to the rise of AI coding assistants, Mike shares hard-won lessons about what actually worked in production, the security challenges developers face, and why the future might be about giving agents access to filesystems and command lines rather than endless tool definitions. Discover how MCP evolved from standard IO to becoming the plugin ecosystem for IDEs, the security concerns around giving agents local machine access, and context overloading challenges. Mike walks through the framework evolution from heavy prompt engineering to model-centric approaches, why he abandoned his own framework for Strands Agents, and the rise of lightweight frameworks like ADK, Strands, and Spring AI. Learn about the real agent success story of 2025: AI coding assistants like Kiro, and Claude Code expanding beyond just code. Mike shares insights on agent skills for progressive disclosure, giving agents filesystem and command line access, long-running multi-agent systems, and moving from laptop productivity to production-scale agents. | — |
| 2/4/26 | Spec-Driven Development in Practice: An AWS Hero Journey | Christian, AWS Hero and Solution Architect at Bundesliga, shares his journey and hard-won lessons from adopting spec-driven development with AI coding assistants at enterprise scale. Learn when to use specs vs vibe coding, how to build effective steering documents, and practical strategies for helping engineering teams transition from traditional development to AI-assisted workflows. Discover the difference between spec-driven and vibe coding approaches, when to use each, and how to build effective steering documents that guide AI assistants. Christian shares enterprise adoption strategies that actually work, including the show-and-tell approach to reduce AI adoption fear, treating AI as a peer teammate, and creating centers of excellence for sharing learnings. We explore custom agents and the single responsibility principle, context engineering over prompt engineering, and dive into exciting re:Invent announcements like Lambda Durable Functions. Whether you're leading engineering teams, exploring AI-assisted development, or looking to optimize your development workflow, this conversation offers practical insights from real-world enterprise implementation. | — |
| 1/29/26 | Native Speed, Modern Safety: Swift for Backend Development | Join us as we explore Swift beyond iOS with Sebastien Stormacq, AWS Developer Advocate and Swift specialist. Discover why Swift is becoming a compelling choice for server-side development, offering native compilation, memory safety without garbage collection, and modern concurrency features that deliver exceptional performance and cost efficiency. Seb shares how Apple processes billions of daily requests using Swift on AWS infrastructure, achieving 40% better performance and 30% lower costs when migrating services from Java. We dive into the technical advantages that make Swift competitive with traditional backend languages, explore the vibrant server-side ecosystem with frameworks like Vapor and Hummingbird, and discuss practical implementations including serverless architectures on AWS Lambda. Whether you're a Swift developer curious about server-side possibilities, a full-stack developer looking to unify your tech stack, or a backend engineer evaluating language options, this conversation offers practical insights into Swift's capabilities beyond the client. | — |
| 11/28/25 | Local Unit Testing for Step Functions | Join us as we dive into the new local unit testing capabilities for AWS Step Functions with Jas Narula, Product Manager from the Step Functions team. We explore how developers can now test their workflows locally using the enhanced Test State API, moving beyond the limitations of the discontinued Step Functions Local container. Jas walks us through the new mocking capabilities, support for advanced states like Map and Parallel, and how this API-based approach gives you the same production runtime for testing. We also discuss the partnership with LocalStack for offline testing, the developer experience with popular testing frameworks like PyTest and Jest, and why this new approach makes Step Functions development more like traditional test-driven development. Whether you're orchestrating Lambda functions, calling Bedrock APIs, or building complex business workflows, this episode shows you how to test with confidence before deploying to the cloud. | — |
| 11/21/25 | Building AWS Builder Center: Architecture Lessons from a Large-Scale Community Platform | In this episode, we dive deep into AWS Builder Center, the new community platform designed to consolidate all AWS developer resources into one central hub. Roopal Jain, Software Development Engineer on the Builder Center team, explains how this platform brings together previously scattered AWS community properties like re:Post, Skill Builder, and community.aws into a unified experience for builders. Beyond exploring what Builder Center offers, from articles and events to toolboxes organized by programming language, we take a technical deep dive into how the team built this large-scale web application. Roopal shares the architectural decisions behind their serverless microservices approach, the challenges of integrating Neptune graph database for social features like user following, and creative solutions for handling dual authentication methods in API Gateway. The conversation reveals real-world implementation challenges that many developers face, from VPC networking complexities to service-to-service authentication patterns. We also discuss Builder ID, AWS's new individual identity system, and get a glimpse of what's coming next for the platform. | — |
| 11/14/25 | Amazon ECS Managed Instances for containerized applications | In this episode, we dive deep into Amazon ECS Managed Instances, a new compute option that bridges the gap between EC2 and Fargate for container deployments. Our guest Olly Pomeroy, AWS Container Specialist, explains how this new offering provides the flexibility of EC2 with the managed experience of Fargate. Learn about the architecture behind ECS Managed Instances, its pricing model, and how it handles instance lifecycle management automatically. Discover how AWS manages the underlying operating system using Bottlerocket OS, providing enhanced security through a read-only file system. Whether you're running GPU workloads, need specific instance types, or want to optimize costs, this episode covers everything you need to know about this new deployment option for containerized applications. | — |
| 11/7/25 | How to not worry about networking on AWS? | In this follow-up episode of the AWS Developers Podcast, we continue the conversation with Alex Huides, Principal Network Specialist Solutions Architect at AWS, focusing on Amazon VPC Lattice. We explore how developers can simplify networking concerns while maintaining robust connectivity between applications. Alex explains how VPC Lattice introduces a new boundary concept called service networks, which allows applications to communicate across accounts and VPCs regardless of IP overlap issues. The discussion covers how VPC Lattice abstracts away complex networking details, replacing traditional load balancers while providing secure, private connectivity between services. This episode demonstrates how AWS is removing undifferentiated heavy lifting in networking, making it easier for developers to focus on building applications. | — |
| 10/31/25 | Why developers should care about cloud networking | In this episode of the AWS Developers Podcast, we dive deep into the world of networking from a developer's perspective. Join host Sebastien Stormacq and guest Alex Huides, Principal Network Specialist Solutions Architect at AWS, as they explore why developers should care about networking in the cloud. They discuss the evolution of networking roles from traditional IT to cloud environments, explain fundamental AWS networking concepts, and examine various connectivity options like VPC Peering, Transit Gateway, and PrivateLink. The conversation highlights the challenges of managing network connectivity at scale in multi-account and multi-region architectures, while setting the stage for a deeper discussion about Amazon VPC Lattice in next week's episode. | — |
| 10/24/25 | AgentCore Identity | In this episode of the AWS Developers Podcast, we dive deep into Amazon Bedrock AgentCore Identity with Abram Douglas. Learn how this new service helps developers manage identities and authentication flows for AI agents at scale. Discover the seven core components of AgentCore and understand how the identity service simplifies complex OAuth2 flows and token management. Whether you're building AI agents that need to interact with third-party services like Google Calendar or Slack, this episode explains how AgentCore Identity removes the undifferentiated heavy lifting of identity management, token vaulting, and secure credential handling. Perfect for developers looking to deploy production-ready AI agents with enterprise-grade security. | — |
| 10/17/25 | Building AI Agents with the Strands SDK | In this episode of the AWS Developers Podcast, we dive deep into Strands Agents, AWS's open-source framework for building AI agents. Our guest Arron Bailiss, Principal Engineer and Tech Lead for Strands, explains how this framework evolved from an internal AWS tool to a developer-friendly, open-source solution. Learn how Strands simplifies AI agent development with just a few lines of code while maintaining production-ready capabilities. Arron discusses the framework's unique model-driven approach, its support for both MCP and A2A protocols, and how it powers various AWS services including Amazon Q Developer and AWS Glue. Discover how Strands enables multi-agent systems through swarms, supports various deployment options, and get insights into the roadmap including TypeScript support and voice agent capabilities. | — |
| 10/10/25 | Scaling E-commerce with Serverless: The moonpig.com Story | In this episode, we dive deep into Moonpig's migration journey from an on-premises ASP.NET monolithic application to a fully serverless architecture on AWS. Richard Pearson, Head of Engineering, and Alexis Lowe, Principal Engineer at Moonpig, share their experience transforming a 25-year-old e-commerce platform. They discuss how they tackled the challenges of migrating from SQL Server to DynamoDB, implemented multi-region deployment, and achieved seamless scalability for their peak trading periods. Learn about their "no VPC" policy, their approach to observability, and how they organized their teams to embrace DevOps culture. This episode is particularly relevant for organizations considering a similar journey to serverless architecture or looking to scale their platforms globally. | — |
| 10/3/25 | Deploying MCP servers on Lambda | Update (Oct. 2025): After we recorded this episode (July 10th, 2025), AWS launched Amazon Bedrock AgentCore, which went into preview on July 16th, 2025 and became generally available on Oct. 13th, 2025. AgentCore is the recommended solution for deploying your MCP agents on AWS. We keep this episode available as a learning resource, but deploying MCP on Lambda is not the recommended architecture for your production workloads. In this episode, we dive deep into MCP (Model Context Protocol) servers on AWS Lambda. We explore what MCP is, how it enables AI systems to interact with tools through standardized protocols, and practical implementations on AWS Lambda. The discussion covers authentication mechanisms, deployment strategies, and the future potential of MCP servers as a marketplace for AI capabilities. Whether you're building AI-powered applications or interested in exposing your business capabilities to AI systems, this episode provides valuable insights into the technical aspects and business opportunities of MCP servers. | — |
| 9/26/25 | When AI meets biology: Using LLM to find natural alternatives to antibiotics | In this episode, we explore how Phagos, a French biotech startup, combines biology, data science, and cloud computing to combat antimicrobial resistance. Their innovative approach uses bacteriophages, natural predators of bacteria, as an alternative to antibiotics. We discuss how they leverage AWS services, including SageMaker and batch processing, to analyze genomic data and train specialized language models that can predict phage-bacteria interactions. Our guests explain how they process terabytes of genetic data, train and deploy AI models, and create user-friendly interfaces for their lab scientists. This fascinating conversation reveals how cloud computing and artificial intelligence are revolutionizing biotechnology and potentially helping solve one of this century's biggest health challenges. | — |
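Several episodes above (notably "Evolving Lambda" on 3/4/26) discuss the checkpoint-replay model behind durable execution. As a rough intuition for listeners, here is a toy sketch in plain Python — not the AWS Lambda Durable Functions SDK; `journal`, `step`, and `workflow` are invented names. The idea: on every (re)invocation the function re-runs from the top, and any step whose result was already journaled is skipped by replaying the recorded value instead of executing again.

```python
# Toy illustration of checkpoint-replay durable execution.
# Not the AWS SDK: all names here are invented for the sketch.

journal = {}  # persisted step results, keyed by step name
calls = []    # records real executions, to show replay skipping them

def step(name, fn):
    """Run fn once; on replay, return the journaled result instead."""
    if name in journal:
        return journal[name]       # replay: skip re-execution
    result = fn()
    journal[name] = result         # checkpoint the result
    return result

def workflow():
    a = step("reserve", lambda: calls.append("reserve") or "res-1")
    b = step("charge", lambda: calls.append("charge") or f"charged:{a}")
    return b

first = workflow()    # first run: both steps actually execute
second = workflow()   # simulated crash-and-retry: pure replay
assert first == second == "charged:res-1"
assert calls == ["reserve", "charge"]  # each side effect ran exactly once
```

Because replay makes retries cheap and side effects run once, the same shape supports long waits and human-in-the-loop pauses: the function can be torn down and re-driven later, resuming from the journal.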
Showing 25 of 207 episodes.
Chart Positions
1 placement across 1 market.