
Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Most discussed topics
Brands & references
Total monthly reach
Estimated from 7 chart positions in 7 markets.
By chart position
- 🇨🇦 CA · Technology · #18 · 7.5K to 30K
- 🇮🇳 IN · Technology · #102 · 1K to 10K
- 🇮🇪 IE · Technology · #47 · 10K to 30K
- 🇹🇷 TR · Technology · #89 · 3K to 10K
- 🇧🇪 BE · Technology · #96 · 3K to 10K
- Per-Episode Audience
Est. listeners per new episode within ~30 days
7.7K to 31K · 🎙 Daily cadence · 345 episodes · Last published 1w ago
- Monthly Reach
Unique listeners across all episodes (30 days)
26K to 103K · 🇨🇦 29% · 🇮🇪 29% · 🇮🇳 10% · +4 more
- Active Followers
Loyal subscribers who consistently listen
14K to 57K
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers
—
Total Plays
—
Total Reviews
—
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Host · from 10 eps
Recent guests
Recent episodes
How Claude Mythos Changes Vulnerability Management: From CVSS to Exploitability
May 5, 2026
44m 38s
AISPM Isn't Enough: How to Apply Zero Trust to AI Agents
Apr 29, 2026
54m 01s
The Rise of Agentic Cloud Security: Code-to-Cloud Shrinks to 3 Days
Apr 21, 2026
26m 53s
Why EDR Fails at AI Security & The Rise of Endpoint Behavior Modeling
Apr 14, 2026
31m 06s
Solving Prompt Injection & Shadow AI for AI Malware
Apr 7, 2026
36m 36s
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Topics | Guests | Brands | Places | Keywords | Sponsor | Length |
|---|---|---|---|---|---|---|---|---|
| 5/5/26 | How Claude Mythos Changes Vulnerability Management: From CVSS to Exploitability ✨ | vulnerability management · AI in security +4 | Brad Hibbert | Claude Mythos · Brinqa +1 | — | vulnerability management · Claude Mythos +6 | — | 44m 38s |
| 4/29/26 | AISPM Isn't Enough: How to Apply Zero Trust to AI Agents ✨ | Zero Trust · AI Security +3 | Shawn Hays | Microsoft Copilot · Varonis Atlas +1 | — | Zero Trust · AI Security +3 | — | 54m 01s |
| 4/21/26 | The Rise of Agentic Cloud Security: Code-to-Cloud Shrinks to 3 Days ✨ | cloud security · AI adoption +5 | Elad Koren | Palo Alto Networks · Cortex Cloud | — | cloud security · AI +5 | — | 26m 53s |
| 4/14/26 | Why EDR Fails at AI Security & The Rise of Endpoint Behavior Modeling ✨ | EDR failures · AI security +4 | Brandon Dixon | Ent AI · Microsoft +5 | — | EDR · AI security +4 | — | 31m 06s |
| 4/7/26 | Solving Prompt Injection & Shadow AI for AI Malware ✨ | AI security · prompt injection +4 | Jasson Casey | Beyond Identity · Anthropic +2 | — | AI agents · malware +7 | — | 36m 36s |
| 3/10/26 | Browser Security Explained: Consent Phishing, "Click Fix" Attacks & The Limits of EDR ✨ | browser security · phishing +3 | Adam Bateman | Push Security · Azure +1 | — | browser-native exploits · Click Fix attacks +3 | — | 46m 07s |
| 3/6/26 | Is AI Hallucinations a Myth and the Real Threat from AI ✨ | AI-driven attacks · cybersecurity +3 | Edward Wu | Dropzone AI · CloudSecPod | — | AI · cybersecurity +5 | — | 40m 02s |
| 2/20/26 | Why AI Infrastructure is Harder to Secure Than Cloud ✨ | AI security · cloud security +4 | Toni De La Fuente | Prowler · Claude Code +3 | — | AI infrastructure · cloud security +6 | — | 34m 03s |
| 2/10/26 | How Attackers Bypass AI Guardrails with Natural Language ✨ | AI security · natural language attacks +4 | Eduardo Garcia | Check Point · Cloud Security Podcast +1 | — | Generative AI · security controls +3 | — | 46m 36s |
| 2/6/26 | Vulnerability Management vs. Exposure Management ✨ | Vulnerability Management · Exposure Management +3 | Brad Hibbert | Brinqa · CloudSecPod | — | vulnerability management · exposure management +5 | — | 39m 38s |
| 2/5/26 | Is Developer Friendly AI Security Possible with MCP & Shadow AI | Is "developer-friendly" AI security actually possible? In this episode, Bryan Woolgar-O'Neil (CTO & Co-founder of Harmonic Security) joins Ashish to dismantle the traditional "block everything" approach to security. Bryan explains why 70% of Model Context Protocol (MCP) servers are running locally on developer laptops and why trying to block them is a losing battle. Instead, he advocates for a "coaching" approach, intervening in real time to guide engineers rather than stopping their flow. We dive deep into the technical realities of MCP (Model Context Protocol), why it's becoming the standard for connecting AI to data, and the security risks of connecting it to production environments. Bryan also shares his prediction that Small Language Models (SLMs) will eventually outperform general giants like ChatGPT for specific business tasks. Guest Socials - Bryan's Linkedin. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast, AI Security Podcast. Questions asked: (00:00) Introduction (01:55) Who is Bryan Woolgar-O'Neil? (03:00) Why AI Adoption Stops at Experimentation (05:15) The "Shadow AI" Blind Spot: Firewall Stats vs. Reality (08:00) Is AI Security Fundamentally Different? (Speed & Scale) (10:45) Can Security Ever Be "Developer Friendly"? (14:30) What is MCP (Model Context Protocol)? (17:20) Why 70% of MCP Usage is Local (and the Risks) (21:30) The "Coaching" Approach: Don't Just Block, Educate (25:40) Developer First: Permissive vs. Blocking Cultures (30:20) The Rise of the "Head of AI" Role (34:30) Use Cases: Workforce Productivity vs.
Product Integration (41:00) An AI Security Maturity Model (Visibility -> Access -> Coaching) (46:00) Future Prediction: Agentic Flows & Urgent Tasks (49:30) Why Small Language Models (SLMs) Will Win (53:30) Fun Questions: Feature Films & Pork Dumplings | — | ||||||
| 1/21/26 | Why AI Can't Replace Detection Engineers: Build vs. Buy & The Future of SOC | Is the AI SOC a reality, or just vendor hype? In this episode, Antoinette Stevens (Principal Security Engineer at Ramp) joins Ashish to dissect the true state of AI in detection engineering. Antoinette shares her experience building a detection program from scratch, explaining why she doesn't trust AI to close alerts due to hallucinations and faulty logic. We explore the "engineering-led" approach to detection, moving beyond simple hunting to building rigorous testing suites for detection-as-code. We discuss the shrinking entry-level job market for security roles, why software engineering skills are becoming non-negotiable, and the critical importance of treating AI as a "force multiplier, not your brain". Guest Socials - Antoinette's Linkedin. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast YouTube, Cloud Security Newsletter. If you are interested in AI Security, you can check out our sister podcast, AI Security Podcast. Questions asked: (00:00) Introduction (02:25) Who is Antoinette Stevens? (04:10) What is an "Engineering-Led" Approach to Detection? (06:00) Moving from Hunting to Automated Testing Suites (09:30) Build vs. Buy: Is AI Making it Easier to Build Your Own Tools? (11:30) Using AI for Documentation & Playbook Updates (14:30) Why Software Engineers Still Need to Learn Detection Domain Knowledge (17:50) The Problem with AI SOC: Why ChatGPT Lies During Triage (23:30) Defining AI Concepts: Memory, Evals, and Inference (26:30) Multi-Agent Architectures: Using Specialized "Persona" Agents (28:40) Advice for Building a Detection Program in 2025 (Back to Basics) (33:00) Measuring Success: Noise Reduction vs.
False Positive Rates (36:30) Building an Alerting Data Lake for Metrics (40:00) The Disappearing Entry-Level Security Job & Career Advice (44:20) Why Junior Roles are Becoming "Personality Hires" (48:20) Fun Questions: Wine Certification, Side Quests, and Georgian Food | — | ||||||
| 1/13/26 | ![]() AI Vulnerability Management: Why You Can't Patch a Neural Network | Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models .Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes . We discuss how to update your risk register to speak the language of business and the essential skills security professionals need to survive in an AI-first world .The conversation also covers practical ways to use AI within your security team to combat alert fatigue , the importance of explainability tools like SHAP and LIME , and how to align with frameworks like the NIST AI RMF and the EU AI Act .Guest Socials - Sapna's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Security, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:00) Who is Sapna Paul?(02:40) What is Vulnerability Management in the Age of AI? (05:00) Defining the New Asset: Neural Networks & Models (07:00) The 3 Layers of AI Vulnerability (Production, Data, Behavior) (10:20) Updating the Risk Register for AI Business Risks (13:30) Compliance vs. 
Innovation: Preventing AI from Going Rogue (18:20) Using AI to Solve Vulnerability Alert Fatigue (23:00) Skills Required for Future VM Professionals (25:40) Measuring AI Adoption in Security Teams (29:20) Key Frameworks: NIST AI RMF & EU AI Act (31:30) Tools for AI Security: Counterfit, SHAP, and LIME (33:30) Where to Start: Learning & Persona-Based Prompts (38:30) Fun Questions: Painting, Mentoring, and Vegan Ramen | — | ||||||
| 12/16/25 | ![]() Why Backups Aren't Enough & Identity Recovery is Key against Ransomware | Think your cloud backups will save you from a ransomware attack? Think again. In this episode, Matt Castriotta (Field CTO at Rubrik) explains why the traditional "I have backups" mindset is dangerous. He distinguishes between Disaster Recovery (business continuity for operational errors) and Cyber Resilience (recovering from a malicious attack where data and identity are untrusted) .Matt speaks about the "dirty secrets" of cloud-native recovery, explaining why S3 versioning and replication are not valid cyber recovery strategies . The conversation shifts to the critical, often overlooked aspect of Identity Recovery. If your Active Directory or Entra ID is compromised, it's "ground zero” and you can't access anything. Matt argues that identity must be treated as the new perimeter and backed up just like any other critical data source .We also explore the impact of AI agents on data integrity, how do you "rewind" an AI agent that hallucinated and corrupted your data? 
Plus, practical advice on DORA compliance, multi-cloud resiliency, and the "people and process" side of surviving a breach.Guest Socials - Matt's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions:(00:00) Introduction(02:20) Who is Matt Castriotta?(03:20) Defining Cyber Resilience: The Ability to Say "No" to Ransomware(05:00) Why "I Have Backups" is Not Enough(06:45) The Difference Between Disaster Recovery and Cyber Recovery(10:20) Cloud Native Risks: Versioning and Replication Are Not Backups(12:50) DORA Compliance: Multi-Cloud Resiliency & Egress Costs(15:10) The "Shared Responsibility Model" Trap in Cloud(17:45) Identity is the New Perimeter: Why You Must Back It Up(22:30) Identity Recovery: Can You Restore Your Active Directory in Minutes?(25:40) AI and Data: The New "Oil" and "Crown Jewels"(27:20) Rubrik Agent Cloud: Rewinding AI Agent Actions(29:40) Top 3 Priorities for a 2026 Resiliency Program(33:10) Fun Questions: Guitar, Family, and Italian Food | — | ||||||
| 12/9/25 | ![]() How to secure your AI Agents: A CISOs Journey | Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform and how security had to evolve to keep up.Yash spoke about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail . We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments .This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely .Questions asked:Guest Socials - Yash's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:20) Who is Yash Kosaraju? (CISO at Sendbird)(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform(05:00) Balancing Speed and Security in an AI Transition(06:50) Embedding Security Engineers into AI Sprint Teams(08:20) Threats in the AI Agent World (Data & Vendor Risks)(10:50) Blind Spots: "It's Microsoft, so it must be secure"(12:00) Securing AI Agents vs. 
AI-Embedded Applications(13:15) The Risk of Agents Making Changes in Customer Environments(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality) (17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA(18:25) What is "Trust OS"? A Foundation for Responsible AI(20:45) Balancing Agent Security vs. Endpoint Security(24:15) AI Incident Response: When an AI Gives a Wrong Answer(29:20) Security for Platform Engineers: Enabling vs. Blocking(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees(32:45) Building a "Security as Enabler" Culture(36:15) What Questions to Ask AI Vendors (Paying with Data?)(39:20) Personal Use of Corporate AI Accounts(43:30) Using AI to Learn AI (Gemini Conversations)(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"(48:20) The AI CTF: Gamifying Security Training(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food | — | ||||||
| 12/4/25 | AI-First Vulnerability Management: Should CISOs Build or Buy? | Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management. While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking requiring specialized skills often missing in security teams. He also warns against the "RAG drug": relying too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, where it often fails. The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability, the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence. Guest Socials - Santiago's Linkedin. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast YouTube, Cloud Security Newsletter. If you are interested in AI Cybersecurity, you can check out our sister podcast, AI Security Podcast. Questions asked: (00:00) Introduction (02:00) Who is Santiago Castiñeira? (02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning) (04:55) The "Build vs.
Buy" Debate: Can I Just Use ChatGPT?(07:30) The "Bus Factor" Risk of Internal Tools(08:30) Why MCP (Model Context Protocol) Struggles at Scale(10:15) The Architecture of an AI-First Security System(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals(17:20) Where to Start if You Must Build Internally(19:00) The Hidden Need for Data & Software Engineers in Security Teams(21:50) Managing Prompt Drift and Consistency(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)(30:20) Rethinking Vulnerability Management Metrics in the AI Era(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"(35:30) The Hidden Cost of AI: Token Usage at Scale(37:15) Multi-Agent Governance: Preventing Rogue Agents(41:15) The Future: Semi-Autonomous Security Fleets(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence(58:15) Final Questions: Kids, Argentine Steak, and Closing | — | ||||||
| 12/2/25 | SIEM vs. Data Lake: Why We Ditched Traditional Logging? | In this episode, Cliff Crosland, CEO & co-founder of Scanner.dev, shares his candid journey of trying (and initially failing) to build an in-house security data lake to replace an expensive traditional SIEM. Cliff explains the economic breaking point where scaling a SIEM became "more expensive than the entire budget for the engineering team". He details the technical challenges of moving terabytes of logs to S3 and the painful realization that querying them with Amazon Athena was slow and costly for security use cases. This episode is a deep dive into the evolution of logging architecture, from SQL-based legacy tools to the modern "messy" data lake that embraces full-text search on unstructured data. We discuss the "data engineering lift" required to build your own, the promise (and limitations) of Amazon Security Lake, and how AI agents are starting to automate detection engineering and schema management. Guest Socials - Cliff's Linkedin. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast YouTube, Cloud Security Newsletter. If you are interested in AI Cybersecurity, you can check out our sister podcast, AI Security Podcast. Questions asked: (00:00) Introduction (02:25) Who is Cliff Crosland? (03:00) Why Teams Are Switching from SIEMs to Data Lakes (06:00) The "Black Hole" of S3 Logs: Cliff's First Failed Data Lake (07:30) The Engineering Lift: Do You Need a Data Engineer to Build a Lake? (11:00) Why Amazon Athena Failed for Security Investigations (14:20) The Danger of Dropping Logs to Save Costs (17:00) Misconceptions About Building Your Own Data Lake (19:00) The Evolution of Logging: From SQL to Full-Text Search (21:30) Is Amazon Security Lake the Answer?
(OCSF & Custom Logs)(24:40) The Nightmare of Log Normalization & Custom Schemas(28:00) Why Future Tools Must Embrace "Messy" Logs(29:55) How AI Agents Are Automating Detection Engineering(35:45) Using AI to Monitor Schema Changes at Scale(39:45) Build vs. Buy: Does Your Security Team Need Data Engineers?(43:15) Fun Questions: Physics Simulations & Pumpkin Pie | — | ||||||
| 11/18/25 | How to Build Trust in an AI SOC for Regulated Environments | How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on as a self-proclaimed "AI skeptic". Grant shared that after 15 years of being "scared to death" by high-false-positive AI, modern LLMs have changed the game. The key to trust lies in two pillars: explainability (is the decision reasonable?) and traceability (can you audit the entire data trail, including all 40-50 queries?). Grant talks about the critical architectural components for regulated industries, including single-tenancy, bring-your-own-cloud (BYOC) for data sovereignty, and model portability. In this episode we compare AI SOC to traditional MDRs and discuss real-world "bake-off" results where an AI SOC had 99.3% agreement with a human team on 12,000 alerts but was 11x faster, with an average investigation time of just four minutes. Guest Socials - Grant's Linkedin. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast YouTube, Cloud Security Newsletter. If you are interested in AI Cybersecurity, you can check out our sister podcast, AI Security Podcast. (00:00) Introduction (02:00) Who is Grant Oviatt? (02:30) How to Establish Trust in an AI SOC for Regulated Environments (03:45) Explainability vs. Traceability: The Two Pillars of Trust (06:00) The "Hard SOC Life": Pre-AI vs. AI SOC (09:00) From AI Skeptic to AI SOC Founder: What Changed? (10:50) The "Aha!"
Moment: Breaking Problems into Bite-Sized Pieces(12:30) What Regulated Bodies Expect from an AI SOC(13:30) Data Management: The Key for Regulated Industries (PII/PHI) (14:40) Why Point-in-Time Queries are Safer than a SIEM (15:10) Bring-Your-Own-Cloud (BYOC) for Financial Services (16:20) Single-Tenant Architecture & No Training on Customer Data (17:40) Bring-Your-Own-Model: The Rise of Model Portability (19:20) AI SOC vs. MDR: Can it Replace Your Provider? (19:50) The 4-Minute Investigation: Speed & Custom Detections (21:20) The Reality of Building Your Own AI SOC (Build vs. Buy)(23:10) Managing Model Drift & Updates(24:30) Why Prophet Avoids MCPs: The Lack of Auditability (26:10) How Far Can AI SOC Go? (Analysis vs. Threat Hunting)(27:40) The Future: From "Human in the Loop" to "Manager in the Loop" (28:20) Do We Still Need a Human in the Loop? (95% Auto-Closed) (29:20) The Red Lines: What AI Shouldn't Automate (Yet) (30:20) The Problem with "Creative" AI Remediation(33:10) What AI SOC is Not Ready For (Risk Appetite)(35:00) Gaining Confidence: The 12,000 Alert Bake-Off (99.3% Agreement) (37:40) Fun Questions: Iron Mans, Texas BBQ & SeafoodThank you to Prophet Security for sponsoring this episode. | — | ||||||
| 11/11/25 | ![]() Threat Modeling the AI Agent: Architecture, Threats & Monitoring | Are we underestimating how the agentic world is impacting cybersecurity? We spoke to Mohan Kumar, who did production security at Box for a deep dive into the threats of true autonomous AI agents.The conversation moves beyond simple LLM applications (like chatbots) to the new world of dynamic, goal-driven agents that can take autonomous actions. Mohan took us through why this shift introduces a new class of threats we aren't prepared for, such as agents developing new, unmonitorable communication methods ("Jibber-link" mode).Mohan shared his top three security threats for AI agents in production:Memory Poisoning: How an agent's trusted memory (long-term, short-term, or entity memory) can be corrupted via indirect prompt injection, altering its core decisions.Tool Misuse: The risk of agents connecting to rogue tools or MCP servers, or having their legitimate tools (like a calendar) exploited for data exfiltration.Privilege Compromise: The critical need to enforce least-privilege on agents that can shift roles and identities, often through misconfiguration.Guest Socials - Mohan's LinkedinPodcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(01:30) Who is Mohan Kumar? (Production Security at Box)(03:30) LLM Application vs. 
AI Agent: What's the Difference?(06:50) "We are totally underestimating" AI agent threats(07:45) Software 3.0: When Prompts Become the New Software(08:20) The "Jibber-link" Threat: Agents Ditching Human Language(10:45) The Top 3 AI Agent Security Threats(11:10) Threat 1: Memory Poisoning & Context Manipulation(14:00) Threat 2: Tool Misuse (e.g., exploiting a calendar tool)(16:50) Threat 3: Privilege Compromise (Least Privilege for Agents)(18:20) How Do You Monitor & Audit Autonomous Agents?(20:30) The Need for "Observer" Agents(24:45) The 6 Components of an AI Agent Architecture(27:00) Threat Modeling: Using CSA's MAESTRO Framework(31:20) Are Leaks Only from Open Source Models or Closed (OpenAI, Claude) Too?(34:10) The "Grandma Trick": Any Model is Susceptible(38:15) Where is AI Agent Security Evolving? (Orchestration, Data, Interface)(42:00) Fun Questions: Hacking MCPs, Skydiving & Risk, BiryaniResources mentioned during the episode:Mohan’s Udemy Course -AI Security Bootcamp: LLM Hacking Basics Andre Karpathy's "Software 3.0" Concept "Jibber-link Mode" VideoCrewAI FrameworkOWASP Top 10 for LLM Applications Cloud Security Alliance (CSA) MAESTRO Framework | — | ||||||
| 11/4/25 | ![]() AI is already breaking the Silos Between AppSec & CloudSec | The silos between Application Security and Cloud Security are officially breaking down, and AI is the primary catalyst. In this episode, Tejas Dakve, Senior Manager, Application Security, Bloomberg Industry Group and Aditya Patel, VP of Cybersecurity Architecture discuss how the AI-driven landscape is forcing a fundamental change in how we secure our applications and infrastructure.The conversation explores why traditional security models and gates are "absolutely impossible" to maintain against the sheer speed and volume of AI-generated code . Learn why traditional threat modeling is no longer a one-time event, how the lines between AppSec and CloudSec are merging, and why the future of the industry belongs to "T-shaped engineers" with a multidisciplinary range of skills.Guest Socials - Tejas's Linkedin + Aditya's Linkedin Podcast Twitter - @CloudSecPod If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:-Cloud Security Podcast- Youtube- Cloud Security Newsletter If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security PodcastQuestions asked:(00:00) Introduction(02:30) Who is Tejas Dakve? (AppSec)(03:40) Who is Aditya Patel? (CloudSec)(04:30) Common Use Cases for AI in Cloud & Applications(08:00) How AI Changed the Landscape for AppSec Teams(09:00) Why Traditional Security Models Don't Work for AI(11:00) AI is Breaking Down Security Silos (CloudSec & AppSec)(12:15) The "Hallucination" Problem: AI Knows Everything Until You're the Expert(12:45) The Speed & Volume of AI-Generated Code is the Real Challenge(14:30) How to Handle the AI Code Explosion? 
"Paved Roads"(15:45) From "Department of No" to "Department of Safe Yes"(16:30) Baking Security into the AI Lifecycle (Like DevSecOps)(18:25) Securing Agentic AI: Why IAM is More Important than the Chat(24:00) The Silo: AppSec Doesn't Have Visibility into Cloud IAM(25:00) Merging Threat Models: AppSec + CloudSec(26:20) Using New Frameworks: MITRE ATLAS & OWASP LLM Top 10(27:30) Threat Modeling Must Be a "Living & Breathing Process"(28:30) Using AI for Automated Threat Modeling(31:00) Building vs. Buying AI Security Tools(34:10) Prioritizing Vulnerabilities: Quality Over Quantity(37:20) The Rise of the "T-Shaped" Security Engineer(39:20) Building AI Governance with Cross-Functional Teams(40:10) Secure by Design for AI-Native Applications(44:10) AI Adoption Maturity: The 5 Stages of Grief(50:00) How the Security Role is Evolving with AI(55:20) Career Advice for Evolving in the Age of AI(01:00:00) Career Advice for Newcomers: Get an IT Help Desk Job(01:03:00) Fun Questions: Cats, Philanthropy, and Thai FoodResources discussed during the interview:Amazon Rufus: (Amazon's AI review summarizer) OWASP Top 10 for LLMsSTRIDE Threat Model: (Microsoft methodology) MITRE ATLASCloud Security Alliance (CSA) Maestro Framework CISA KEV (Known Exploited Vulnerabilities)Book: Range: Why Generalists Triumph in a Specialized World by David Epstein Anjali Charitable TrustAditya Patel's Blog | — | ||||||
| 10/28/25 | AI Agents for SOC: Hype Curve vs. Measurable ROI | Is the AI SOC analyst just hype, or is there measurable ROI? We spoke to Edward Wu, founder of Dropzone AI, about this, and he shared insights from a recent Cloud Security Alliance (CSA) benchmark report that quantified the impact of AI augmentation on SOC teams. The study revealed significant improvements in speed (45-60% faster investigations) and completeness, even for analysts using the tech for the first time. Edward contrasted the "robotic" limitations of traditional SOAR playbooks with the adaptive capabilities of agentic AI systems, which can autonomously investigate alerts end-to-end without pre-defined scripts. He shared that while AI won't entirely replace human analysts ("That's not going to happen"), it will automate much of the manual Tier 1 toil, freeing up humans for higher-value roles like security architecture, transformation, and detection engineering. Guest Socials - Edward's Linkedin. Podcast Twitter - @CloudSecPod. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels: Cloud Security Podcast YouTube, Cloud Security Newsletter, Cloud Security BootCamp. If you are interested in AI Cybersecurity, you can check out our sister podcast, AI Security Podcast. Questions asked: (00:00) Introduction (02:40) Who is Edward Wu? (03:30) The Evolution of AI Agents Since ChatGPT (04:35) Surprising Findings from the CSA AI SOC Benchmark Report (06:40) Why Has Traditional Security Automation (SOAR) Underdelivered? (09:30) How AI SOC Analysts Differ from SOAR Playbooks (11:30) Does Agentic AI Reduce the Need for Security Data Lakes? (13:20) The Evolving ROI for SOC in the AI Era (14:50) ROI Use Case 1: Reducing Alert Investigation Latency (15:15) ROI Use Case 2: Increasing Alert Coverage (Mediums & Lows) (16:20) ROI Use Case 3: Depth of Coverage & Skill Uniformity (18:15) Achieving Both Speed and Thoroughness with AI (19:40) How Far Can
AI Go? Detection vs. Investigation vs. Response(21:35) AI SOC Hype vs. Reality: Receptiveness and Trust(24:20) The Future Role of Tier 1 SOC Analysts(27:40) What Scale Benefits Most from AI SOC Analysts? (Enterprise & MSPs)(29:00) The Build vs. Buy Dilemma for AI SOC Technology ($20M R&D Reality)(33:10) Training Budgets: What Skills Should Future SOC Teams Learn?Resources spoken about during the episode:Beyond the Hype: AI Agents in the SOC Benchmark StudyRequest a Demo here | — | ||||||
| 10/21/25 | ![]() Can You Build an AI SOC with Claude Code? The Reality vs. Hype | Can you just use Claude Code or another LLM to "vibe code" your way into building an AI SOC? In this episode, Ariful Huq, Co-Founder and Head of Product at Exaforce, explains why the reality is far more complex than the hype suggests, and why a simple "bolt-on" approach to AI in the SOC is insufficient if you're looking for real security outcomes. We speak about the foundational elements required to build a true AI SOC, starting with the data. It's "well more than just logs and event data," requiring the integration of config, code, and business context to remove guesswork and give LLMs the information they need to function accurately. The discussion covers the evolution beyond traditional SIEM capabilities, the challenges of data lake architectures for real-time security processing, and the critical need for domain-specific knowledge to build effective detections, especially for SaaS platforms like GitHub that lack native threat detection. This is for SOC leaders and CISOs feeling the pressure to integrate AI. Learn what it really takes to build an AI SOC, the unspoken complexities, and how the role of the security professional is evolving towards the "full-stack security engineer". Guest Socials - Ariful's LinkedIn. Questions asked: (00:00) Introduction (02:30) Who is Ariful Huq? (03:40) Can You Just Use Claude Code to Build an AI SOC? (06:50) Why a "Bolt-On" AI Approach is Tough for SOCs (08:15) The Importance of Data: Beyond Logs to Config, Code & Context (09:10) Building AI Native Capabilities for Every SOC Task (Detection, Triage, Investigation, Response) (12:40) The Impact of Cloud & SaaS Data Volume on Traditional SIEMs (14:15) Building AI Capabilities on AWS Bedrock: Best Practices & Challenges (17:20) Why SIEM Might Not Be Good Enough Anymore (19:10) The Critical Role of Diverse Data (Config, Code, Context) for AI Accuracy (22:15) Data Lake Challenges (e.g., Snowflake) for Real-Time Security Processing (26:50) Detection Coverage Blind Spots, Especially for SaaS (e.g., GitHub) (31:40) Building Trust & Transparency in AI SOCs (35:40) Rethinking the SOC Team Structure: The Rise of the Full-Stack Security Engineer (42:15) Final Questions: Running, Family, and Turkish Food | — |
| 10/10/25 | ![]() Incident Response of Kubernetes and how to Automate Containment | How do you perform incident response on a Kubernetes cluster when you're not even on the same network? In this episode, Damien Burks, Senior Security Engineer, breaks down the immense challenges of container security and why most commercial tools are failing at automated response. While many CNAPPs provide runtime detection, they lack a "sophisticated approach to automating incident response or containment" in complex environments like private EKS. He shares his hands-on experience building a platform that uses a dynamically deployed Lambda function to contain a compromised EKS node in just 10 minutes, a process that would otherwise take hours of manual work and approvals. This is a guide for any DevSecOps or cloud security professional tasked with securing containerized workloads. The conversation also covers a layered prevention strategy, the evolving role of the cloud security engineer, and career advice for those looking to enter the field. Guest Socials - Damien's LinkedIn. Questions asked: (00:00) Introduction (02:15) Who is Damien Burks? (03:20) The State of Cloud Incident Response in 2025 (05:15) Why There is No Sophisticated, Automated IR for Kubernetes (06:20) A Deep Dive into Kubernetes Incident Response (07:30) The Unique Challenge of a Private EKS Cluster (12:15) A Layered Approach to Prevention in a DevSecOps Culture (17:00) How to Automate Containment in a Private EKS Cluster (17:40) From Hours to 10 Minutes: The Impact of Automation (22:00) The Evolving & Complex Role of the Cloud Security Engineer (25:40) Do We Have Too Much Visibility or Not Enough? (29:00) Career Path: The Value of Learning to Code for DevSecOps (35:00) Damien's Hot Take: "Multi-Cloud Just Means Chaos" (44:20) Career Advice for Traditional IR Professionals Moving to Cloud (47:50) Final Questions: Video Games, Life's Journey, and Gumbo. Resources spoken about during the interview: Damien's Website | — |
| 10/3/25 | ![]() The Truth About AI in the SOC: From Alert Fatigue to Detection Engineering | "The next five years are gonna be wild." That's the verdict from Forrester Principal Analyst Allie Mellen on the state of security operations. This episode dives into the "massive reset" that is transforming the SOC, driven by the rise of generative AI and a revolution in data management. Allie explains why the traditional L1/L2/L3 SOC model, long considered a "rite of passage" that leads to burnout, is being replaced by a more agile and effective detection engineering structure. As a self-proclaimed "AI skeptic," she cuts through the marketing hype to reveal what's real and what's not, arguing that while we are "not really at the point of agentic" AI, the real value lies in specialized triage and investigation agents. Guest Socials - Allie's LinkedIn. Questions asked: (00:00) Introduction (02:35) Who is Allie Mellen? (03:15) What is Security Operations in 2025? The SIEM & XDR Shakeup (06:20) The Rise of Security Data Lakes & Data Pipeline Tools (09:20) A "Great Reset" is Coming for the SOC (10:30) Why the L1/L2/L3 Model is a Burnout Machine (13:25) The Future is Detection Engineering: An "Infinite Loop of Improvement" (17:10) Using AI Hallucinations as a Feature for New Detections (18:30) AI in the SOC: Separating Hype from Reality (22:30) What is "Agentic AI" (and Are We There Yet?) (26:20) "No One Knows How to Secure AI": The Detection & Response Challenge (28:10) The Critical Role of Observability Data for AI Security (31:30) Are SOC Teams Actually Using AI Today? (34:30) How to Build a SOC Team in the AI Era: Uplift & Upskill (39:20) The 3 Things to Look for When Buying Security AI Tools (41:40) Final Questions: Reading, Cooking, and Sushi. Resources: You can read Allie's blogs here | — |
| 9/23/25 | ![]() The Security Gaps in AWS Bedrock & Azure AI You Need to Know | The race to deploy AI is on, but are the cloud platforms we rely on secure by default? This episode features a practical, in-the-weeds discussion with Kyler Middleton, Principal Developer, Internal AI Solutions at Veradigm, and Sai Gunaranjan, Lead Architect at Veradigm, as they compare the security realities of building AI applications on the two largest cloud providers. The conversation uncovers critical security gaps you need to be aware of. Sai reveals that Azure AI defaults to sending customer data globally for processing to keep costs low, a major compliance risk that must be manually disabled. Kyler breaks down the challenges with AWS Bedrock, including the lack of resource-level security policies and a consolidated logging system that mixes all AI conversations into one place, making incident response incredibly difficult. This is an essential guide for any cloud security or platform engineer moving into the AI space. Learn about the real-world architectural patterns, the insecure defaults to watch out for, and the new skills required to transition from a Cloud Security Engineer to an AI Security Engineer. Guest Socials - Kyler's LinkedIn + Sai's LinkedIn. Questions asked: (00:00) Introduction (02:30) Who are Kyler Middleton & Sai Gunaranjan? (03:40) Common AI Use Cases: Chatbots & Product Integration (05:15) Beyond IAM: The Full Scope of AI Security in the Cloud (07:30) The Role of the Cloud in Deploying Secure AI (13:10) AWS AI Architecture: Bedrock, Knowledge Bases & Vector Databases (15:10) Azure AI Architecture: AI Services, ML Workspaces & Foundry (21:00) The "Delete the Frontend" Problem: The Risk of Agentic AI (23:25) A Security Deep Dive into Microsoft Azure AI Services (29:20) Azure's Insecure Default: Sending Your Data Globally (31:35) A Security Deep Dive into AWS Bedrock (32:30) The Critical Gap: No Resource Policies in AWS Bedrock (33:20) AWS Bedrock's Logging Problem: A Nightmare for Incident Response (36:15) AWS vs. Azure: Which is More Secure for AI Today? (39:20) A Maturity Model for Adopting AI Security in the Cloud (44:15) From Cloud Security to AI Security Engineer: What's the Skill Gap? (48:45) Final Questions: Toddlers, Kickball, Barbecue & Ice Cream | — |
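The automated containment flow Damien describes, a dynamically deployed Lambda isolating a compromised EKS worker node, is not spelled out in the show notes. A minimal sketch of one common pattern is to swap the node's security groups for a pre-created deny-all "quarantine" group and tag the instance for responders. The group ID, helper names, and plan format below are illustrative assumptions, not the platform from the episode.

```python
# Sketch: isolate a compromised EKS worker node by replacing the security
# groups on each of its network interfaces with a deny-all quarantine
# group. The plan is built as plain data first so it can be logged or
# reviewed before execution (approvals being the slow step, per the
# episode). QUARANTINE_SG is a hypothetical, pre-existing deny-all group.

QUARANTINE_SG = "sg-quarantine000000"  # assumed to exist already


def build_containment_plan(instance_id: str, eni_ids: list) -> list:
    """Return a reviewable list of {"action", "params"} steps for one node."""
    plan = [
        {"action": "modify_network_interface_attribute",
         "params": {"NetworkInterfaceId": eni, "Groups": [QUARANTINE_SG]}}
        for eni in eni_ids
    ]
    # Tag the instance so responders can find quarantined nodes later.
    plan.append({"action": "create_tags",
                 "params": {"Resources": [instance_id],
                            "Tags": [{"Key": "ir-status", "Value": "contained"}]}})
    return plan


def execute_plan(plan: list, ec2_client) -> None:
    """Apply the plan with a boto3 EC2 client (e.g. from inside a Lambda)."""
    for step in plan:
        getattr(ec2_client, step["action"])(**step["params"])
```

Separating plan construction from execution keeps the destructive boto3 calls in one small function and makes the containment decision itself auditable, which matters when the goal is cutting hours of manual approvals down to minutes.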
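Kyler and Sai's point about insecure defaults (Azure AI processing data globally unless pinned to a region, Bedrock's off-by-default and consolidated invocation logging) lends itself to a simple posture check. The sketch below flags risky settings in an exported configuration dict; the field names (`data_zone`, `invocation_logging`, `per_workload_destination`) are hypothetical stand-ins for illustration, not actual provider API fields.

```python
# Sketch: flag insecure-by-default AI platform settings in an exported
# config dict. Field names are invented for illustration -- map them to
# your provider's real API or portal settings before using anything
# like this.

def audit_ai_config(config: dict) -> list:
    """Return a list of findings; empty means no flagged defaults."""
    findings = []
    # Azure AI-style risk: data may be processed globally unless the
    # deployment is explicitly pinned to a region.
    if config.get("data_zone", "global") == "global":
        findings.append("data processed globally: pin processing to your region")
    # Bedrock-style risk: invocation logging is off by default, and when
    # enabled everything lands in one consolidated destination.
    logging_cfg = config.get("invocation_logging", {})
    if not logging_cfg.get("enabled", False):
        findings.append("model invocation logging disabled")
    elif not logging_cfg.get("per_workload_destination", False):
        findings.append("all AI conversations share one log destination")
    return findings
```

For example, `audit_ai_config({})` reproduces the episode's two headline gaps, since both risky values are the assumed defaults.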
Chart Positions
7 placements across 7 markets.
