
Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience (est. listeners per new episode within ~30 days): 10,001 - 25,000
- Monthly Reach (unique listeners across all episodes, 30 days): 25,001 - 75,000
- Active Followers (loyal subscribers who consistently listen): 5,001 - 15,000
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers
—
Total Plays
—
Total Reviews
—
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
When AI Agents Change their Intent w/ Frank Vukovits
Apr 29, 2026
Unknown duration
OWASP Top 10, Vibe Coding, and What Developers Miss w/ Tanya Janca
Apr 22, 2026
Unknown duration
The Future of Hacking is Agentic w/ Jason Haddix
Apr 15, 2026
Unknown duration
Open Source Malware, Supply Chain Risk, and Contagious Interviews: w/ Paul McCarty and Jenn Gile
Apr 7, 2026
Unknown duration
Bugcrowd Founder Casey Ellis: AI Slop, and the Future of Hacking
Apr 2, 2026
Unknown duration
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 4/29/26 | When AI Agents Change their Intent w/ Frank Vukovits | AI agents are transforming cybersecurity, from how access is granted to how attacks unfold. Frank Vukovits (Delinea) joins Secure Disclosure to unpack the rise of non-human identities, the risks of autonomous agents, and why concepts like least privilege, identity lifecycle management, and continuous monitoring are more critical than ever. The big question: will AI ultimately make us more secure, or less? | — |
| 4/22/26 | OWASP Top 10, Vibe Coding, and What Developers Miss w/ Tanya Janca | Tanya Janca joins the podcast for a sharp, no-nonsense conversation on the OWASP Top 10, why secure coding still gets skipped, and how AI is reshaping the way developers build and review software. She breaks down why broken access control keeps topping the charts, what security teams keep getting wrong, and how to create guardrails developers will actually use. The episode also dives into vibe coding, supply chain risk, and the future of secure software training. It’s fast, practical, and packed with opinions worth stealing. | — |
| 4/15/26 | The Future of Hacking is Agentic w/ Jason Haddix | Jason Haddix joins the podcast to break down how AI is transforming offensive security — from attacking LLM-powered applications to why he believes 90% of pentests will soon be done by AI. We dive into prompt injection, defending AI systems with layered controls, and how enterprises are (sometimes dangerously) adopting AI internally. We also explore the impact of AI on bug bounty programs, why “fighting AI with AI” is becoming necessary, and what the future holds for human pentesters in an increasingly automated world. | — |
| 4/7/26 | Open Source Malware, Supply Chain Risk, and Contagious Interviews: w/ Paul McCarty and Jenn Gile | In this episode of The Secure Disclosure, Jenn Gile and Paul McCarty from Open Source Malware break down how malicious packages are evolving, why developers are now a primary target, and what security teams still get wrong about software supply chain defense. From contagious interview campaigns to registry weaknesses and response playbooks, this conversation covers the real-world risks behind today’s open source malware problem. Sponsored by Aikido Security (https://aikido.dev). Learn more about Open Source Malware (https://opensourcemalware.com/). Connect with Jenn Gile (https://www.linkedin.com/in/jenngile/). Connect with Paul McCarty (https://www.linkedin.com/in/mccartypaul/). Follow The Secure Disclosure on LinkedIn (https://www.linkedin.com/company/the-secure-disclosure). | — |
| 4/2/26 | Bugcrowd Founder Casey Ellis: AI Slop, and the Future of Hacking | Casey Ellis, founder of Bugcrowd, joins the show to talk about the evolution of bug bounty, how hackers went from outsiders to strategic assets, and why AI-generated bug reports are putting pressure on security teams. We also get into VDPs vs public bounties, pentesting, vulnerability economics, and where security research is headed over the next five years. | — |
| 3/25/26 | Are Humans the Weakest Link in Security? w/ Sean Juroviesky | In this episode of the Secure Disclosure Podcast, we dive into the human side of security with Sean Juroviesky. From why people remain the biggest challenge in cybersecurity to how organizations can build effective security cultures, this conversation explores identity, access management, and the risks introduced by shadow IT and AI. We unpack how to make the secure path the easiest path, how to detect risky behavior without alienating employees, and why over-permissioned AI tools may be the next big threat. It’s a practical, honest discussion on balancing security, usability, and the rapid evolution of AI in modern organizations. Sponsor: this episode is brought to you by Aikido (https://aikido.dev), secure everything from code to cloud. | — |
| 3/17/26 | AI Agents Must Have Identity & Access Control w/ Johannes Keienburg | AI agents are here, and they’re already transforming how we work. But beneath the hype lies a massive, unsolved security problem. In this episode, Mackenzie Jackson sits down with Johannes Keienburg to unpack the reality of autonomous agents: why they’re so powerful, why they’re so dangerous, and why access control is about to become the biggest challenge in cybersecurity. From broken authorization to “agents without brakes,” they explore how today’s systems are fundamentally unprepared, and what needs to change before things go seriously wrong. | — |
| 3/16/26 | The Creator of Curl on Why AI Is Breaking Bug Bounties w/ Daniel Stenberg | Daniel Stenberg, creator of curl, explains how a small open source tool became core internet infrastructure. The conversation covers curl’s origin, maintainer pressure, AI-generated bug bounty spam, the future of vulnerability reporting, and how AI is changing software engineering and security. | — |
| 3/9/26 | LLMs Will Never Be Fully Secure w/ Brooks McMillin | We’re back in the “wild west” — only this time, the apps can be social engineered at machine speed. Live from CactusCon, Brooks McMillin breaks down malicious MCP servers, why we’re repeating the same security mistakes (hello again, broken access control), and why prompt injection probably isn’t going away. We get practical on what to lock down, how to roll out AI tooling safely, and why “AI lipstick” doesn’t change the underlying enterprise risk game. | — |
| 2/26/26 | Leaking or Spying? The Truth About Browser Extensions | In this week’s news brief, Mackenzie explores a comprehensive new report investigating data leakage and potential surveillance behavior in popular browser extensions. The researchers examined how extensions collect and transmit data, conducted behavioral payload analysis, and deployed honey URLs to detect suspicious activity. The episode highlights a critical distinction: some extensions may unintentionally leak data, while others appear purpose-built to collect and transmit it. From creative exfiltration techniques to the broader implications for data loss prevention, this is a fascinating look at how modern browser extensions can quietly put user data at risk and how researchers uncovered it. | — |
| 2/25/26 | Is AI Changing Cybersecurity, Or Just Exposing It? w/ Lester Godsey | Recorded live at Cactus Con, ASU CISO Lester Godsey joins Secure Disclosure to unpack what’s truly new in AI security, and what’s just old problems getting fresh attention. From prompt injection and agentic AI to data classification and privacy, this episode explores how enterprise leaders should think about AI risk in a world where banning it simply isn’t an option. | — |
| 2/19/26 | Will AI Replace Pen Testers? w/ Paul Petefish | AI is taking over the boring stuff — recon, noise, and tier-one work — but when it comes to real-world pentesting, business logic flaws, weird edge cases, and creative thinking still belong to humans. In this episode, Paul Petefish (Evolve Security) and Mackenzie dig into what AI is actually changing in offensive security, why prompt injection is getting weirder, and how “man + machine” is quickly becoming the new normal. #CyberSecurity #Pentesting #AI #AppSec #LLMSecurity #PromptInjection #InfoSec | — |
| 2/12/26 | AI Slop Is Killing Bug Bounties | AI is overwhelming bug bounty programs with convincing but useless reports — and some major projects are shutting theirs down entirely. In this week’s news brief, we break down the economics behind “AI slop,” why curl pulled the plug on its program, and what this means for ethical hackers. Then we revisit OpenClaw, where security researchers are shifting from criticism to collaboration — and even VirusTotal is stepping in. Is AI breaking security… or reshaping it? | — |
| 2/10/26 | Can AI Really Fix Security Bugs? Inside Modern Autofix Systems | Frederick Ryckbosch | AI is transforming application security, not just by finding vulnerabilities but by fixing them safely. In this episode, we sit down with Frederick Ryckbosch and dive into how AI understands code flow, remediates real security issues, and builds trust through testing and feedback loops. A practical look at autofix, dependencies, and the future of secure software development. Read more about AI Autofix: https://www.aikido.dev/features/autofix. Video chapters: 00:00 Introduction to AI and Application Security; 02:00 Where AI Is Actually Useful in Security; 04:10 Understanding Code Flow and Real Vulnerabilities; 07:10 Can AI Safely Fix Security Issues?; 10:35 Building Trust With AI Autofix and Feedback Loops; 12:15 Autofixing Dependencies and Breaking Changes; 17:55 Trust, Risk, and Guardrails for AI Systems; 20:35 The Future of Coding With AI | — |
| 2/5/26 | OpenClaw & ClawHub Is a Malware Nightmare: Inside the AI Agent Supply Chain Crisis | OpenClaw is a powerful new open-source AI agent — and a massive security risk. In this episode, security researcher Paul McCarty joins the show to break down how ClawHub, OpenClaw’s skill registry, is already flooded with malware. We explore how 386 malicious skills were discovered, why AI agents are more dangerous than traditional package managers like npm, how attackers are gaming download stats, and why basic security controls are missing. Plus, updates on the Notepad++ supply chain attack, the Coinbase breach fallout, and a shocking case where penetration testers were prosecuted for doing their jobs. Follow Paul on social media: https://www.linkedin.com/in/mccartypaul/. Read Paul’s article on ClawHub: https://opensourcemalware.com/. Chapters: 00:00:00 OpenClaw and the Rise of AI Agent Security Risks; 00:02:14 ClawHub Skills Explained and How Malware Spreads; 00:03:47 386 Malicious Skills and Real-World Attack Techniques; 00:06:16 Why OpenClaw Could Be More Dangerous Than npm; 00:12:16 Gaming Downloads and Making Malware Look Legit; 00:14:38 How to Secure AI Agent Ecosystems and Use Them Safely | — |
| 2/3/26 | The Security Risk Hiding in AI w/ Matthias Feys | In this episode of Cyber and Saki, Mackenzie sits down with AI expert Matthias Feys from ML6 to chat about how artificial intelligence has gone from niche machine learning projects to the generative AI explosion we see everywhere today. They dig into what’s changed over the last decade, why tools like ChatGPT have been such a game changer, and what people still get wrong when they treat AI as a magic solution for everything. The conversation also covers the security side of AI, why you shouldn’t blindly trust these models, and where Matthias is most excited about AI making a real impact, especially in the “boring work” that can finally be automated. To wrap things up, Mackenzie throws in a hilarious round of “Would You Rather” questions, including vibe coding, AI hackers, and the future of super-intelligence. A thoughtful, funny, and practical look at where AI is headed, and how we can use it responsibly along the way. | — |
| 1/29/26 | News Brief: Inside the Honey Browser Extension Scandal with The Engineer Who Broke It Open | In this episode of Secure Disclosure, we go behind the scenes of the infamous Honey browser extension scandal with special guest J3lte, the engineer who uncovered the data that helped expose what was really happening. From affiliate link manipulation to massive user tracking across thousands of stores, J3lte breaks down how he reverse-engineered Honey, what he discovered, and why browser extensions can be far more dangerous than most people realize. Stay tuned for the untold technical story behind one of the biggest consumer security scandals online. Follow J3lte: https://x.com/j3lte. Original videos from MegaLag: 1st video https://www.youtube.com/watch?v=vc4yL3YTwWk; 2nd video https://www.youtube.com/watch?v=wwB3FmbcC88; 3rd video https://www.youtube.com/watch?v=qCGT_CKGgFE. Other videos covering the scandal (that are awesome): The PrimeTime https://www.youtube.com/watch?v=_acTMUmdY9M; Marques Brownlee https://www.youtube.com/watch?v=EAx_RtMKPm8. News links: ClawdBot VS Extensions Malware https://www.aikido.dev/blog/fake-clawdbot-vscode-extension-malware; Contagious Interview https://opensourcemalware.com/blog/contagious-code-fake-font. Chapters: 00:00 The Honey Scandal Returns; 02:11 Users, Merchants, and Hidden Coupon Abuse; 03:36 Meet J3lte: The Engineer Behind the Investigation; 05:07 Discovering 180,000 Stores in Honey’s Data; 07:11 Affiliate Links Without Coupons: No Value Provided; 09:49 Why Browser Extensions Are So Hard to Trust; 13:54 Malware Trend: The Fake Claudebot VS Code Extension; 15:57 Contagious Interview Coverage; 18:38 SoundCloud Hack | — |
| 1/27/26 | AI is Rewriting Cybersecurity - Guardrails, regulation, and the point of no return w/ Joseph Carson | Social engineering and phishing are evolving fast, and AI is making attacks harder to spot and quicker to scale. Joseph Carson joins the show to break down the biggest risks for defenders, from deepfakes and perfect-language phishing to rapid data analysis and malware that adapts in real time. The conversation also explores guardrails, regulation, and what AI can and cannot do well, plus a quick round of security-themed “Would you rather” questions. Links: LinkedIn https://www.linkedin.com/in/josephcarson/; sponsored link https://www.aikido.dev/. Chapters: 00:00 Intro: AI makes phishing harder to detect; 00:00:28 Welcome and Joe’s background; 00:01:29 Biggest risks: deepfakes and phishing at scale; 00:03:03 AI speeds up analysis of stolen data; 00:04:25 Lower barrier to entry and faster attacker learning; 00:05:31 Malware and campaigns adapting in real time; 00:06:28 Why “bad grammar” is no longer a phishing tell; 00:08:16 Can AI be creative, or is it just probability; 00:12:56 Guardrails, regulation, and the EU vs US vs China approaches; 00:29:30 Would you rather: security tradeoffs and tool choices. #podcast #thesecuredisclosure #cybersecurity | — |
| 12/16/25 | From GitHub Actions to Job Markets: The Real State of Cybersecurity | AI is creeping into every part of software development — including CI/CD pipelines — and attackers are already abusing it. In this episode of the Secure Disclosure Podcast, we break down: a brand-new vulnerability class called Prompt Pwn, where prompt injection inside GitHub Actions can leak secrets and compromise supply chains; a sophisticated malvertising campaign targeting developers via GitHub Pages and Docker Hub; and the reality behind the cybersecurity job market: is there a skills shortage, a hiring freeze, or both? Featuring security researcher Rein Daelman on AI-driven CI/CD risks, and recruiter Barry Prost on how AI is reshaping cybersecurity hiring, skills, and careers. If you care about AppSec, DevOps, supply chain security, or breaking into cybersecurity in 2025, this one’s for you. More information: PromptPwn https://www.aikido.dev/blog/promptpwnd-github-actions-ai-agents; guest LinkedIn (Rein Daelman) https://www.linkedin.com/in/rein-daelman/; Rent a Recruiter https://rentarecruiter.com/; guest LinkedIn (Barry Prost) https://www.linkedin.com/in/barryprost/. Sponsor: Aikido Security https://aikido.dev. Chapters: 00:00 Intro; 02:00 AI prompt injection in CI/CD, GitHub Actions, Prompt Pwn; 12:09 Sponsor Segment; 12:59 Malvertising campaigns targeting devs; 16:39 Cybersecurity job market with Barry Prost | — |
| 12/9/25 | Shai Hulud The Second Coming & Malware for Hire: The Secure Disclosure Podcast | In this episode of Secure Disclosure, we break down two major cybersecurity incidents shaking the industry. First, researcher Charlie Eriksen joins us to reveal how the Shai Hulud “The Second Coming” worm compromised over 800 NPM packages and triggered 30,000+ secret-filled GitHub repos, and why the worm can even wipe your machine when containment fails. Then, we sit down with Jérémy Sicon and Quentin Bourgue from sekoia.io to uncover a highly sophisticated phishing campaign abusing Booking.com accounts using PureRAT malware and a sprawling criminal ecosystem. Subscribe for weekly deep dives into the threats shaping our digital world. Chapters: 00:00 Introduction; 01:03 Shai Hulud: The Second Coming; 17:07 Sponsored Segment (Aikido SafeChain); 17:10 Malware-for-Hire: Booking.com Phishing Operation | — |
| 11/18/25 | Attackers Targeting Code Editors and Critical Infrastructure with Vangelis Stykas & John Tuckner | In this episode of Secure Disclosure, Mackenzie Jackson digs into the surge of malicious VS Code extensions with researcher John Tuckner, founder of Secure Annex. We break down how attackers are shifting toward targeting developers themselves, explore real-world malicious extensions like Ransom Vibe and Sleepy Duck, and discuss why marketplaces like Open VSX are struggling to keep malware out. We also cover new research on secret leaks in top AI companies, and in our Leaders & Legends segment, we speak with Vangelis Stykas (CTO & co-founder of Kumio) about the growing vulnerabilities inside global energy infrastructure, OT security gaps, and the rise of AI-powered pentesting. If you want insights on software supply chain risk, AI security, and critical infrastructure threats, this episode is for you. Links: RansomVibe technical blog https://secureannex.com/blog/ransomvibe/; SleepyDuck technical blog https://secureannex.com/blog/sleepyduck-malware; Wiz research on secrets inside the AI top 50 https://www.wiz.io/blog/forbes-ai-50-leaking-secrets. Chapters: 00:00 Intro; 01:07 Malicious VS Code Extensions (with John Tuckner); 15:31 Secrets Leaking in AI Repositories; 18:55 Sponsor Segment; 19:55 Leaders & Legends: Securing Critical Infrastructure | — |
| 11/11/25 | The Accidental Founder: From Open-Source to AI Startup | Geoffrey De Smet, creator of OptaPlanner and now Timefold.ai, shares how IBM’s acquisition of Red Hat forced him to turn his open-source project into a company. He explains why ChatGPT can’t solve real-world scheduling, what makes heuristic AI different, and how Timefold is saving companies millions of hours through smarter planning. Chapters: 00:00 Introduction; 01:00 Origins of OptaPlanner; 03:00 The First Breakthrough; 05:00 Red Hat & The Open Source Journey; 07:00 IBM Acquires Red Hat; 10:00 Becoming a Founder; 13:00 Finding a Co-Founder; 15:00 Why ChatGPT Can’t Do Scheduling; 17:00 The Math Behind the Madness; 19:00 How Timefold Solves Real Problems; 21:00 AI Hype Cycles; 23:00 Saving Hours and Dollars; 26:00 “Would You Rather”; 29:00 Closing | — |
| 11/4/25 | Secure Code and AI - Paul McCarty & Sooraj Shah on Securing AI Code | In this episode of The Secure Disclosure, host Mackenzie Jackson dives deep into the evolving intersection of AI, security, and development. First, Paul McCarty from Git Safety breaks down his recent discovery of a malicious npm package that impersonated the Claude CLI tool, hijacking developer workflows and acting as a man-in-the-middle for AI API calls. You can read Paul’s full breakdown, “Malicious Claude Code Package Analysis,” here: https://www.getsafety.com/blog-posts/malicious-claude-code-package. Next, Sooraj Shah from Aikido Security joins to unpack findings from the State of AI in Security & Development 2026 Report, which surveyed 450 CISOs about how AI-generated code is reshaping security accountability, visibility, and optimism in the field. Check out the full report here: https://www.aikido.dev/state-of-ai-security-development-2026. This episode explores real-world AI supply chain threats, systemic vulnerabilities in npm, and what organizations must do to stay ahead as AI reshapes modern development. Follow the guests: Mackenzie https://www.linkedin.com/in/advocatemack/; Paul https://www.linkedin.com/in/mccartypaul/; Sooraj https://www.linkedin.com/in/soorajshah/. Chapters: 00:00 Introduction; 01:19 Paul McCarty on the malicious Claude npm package; 04:30 How AI tools are creating new attack paths; 08:06 Systemic issues and trust problems in npm; 10:44 Sooraj Shah on the State of AI in Security & Development; 14:01 Accountability, optimism, and the future of AI security | — |
| 10/29/25 | Episode 13: Malicious VS Code Extensions & The Future of AI Security | In this episode of Secure Disclosure, host Mackenzie Jackson explores the growing threat of malicious VS Code extensions with Rami McCarthy from Wiz and Charlie Eriksen from Aikido Security, diving into how leaked secrets and clever obfuscation put developers at risk. Later, Patrick Debois, the “Godfather of DevOps,” joins to discuss the rise of AI-native development, how it mirrors past DevOps shifts, and what it means for the future of secure software. Links: original post from Aikido https://www.linkedin.com/feed/update/urn:li:activity:7384986044867256320; Wiz security research on VS Code https://www.wiz.io/blog/supply-chain-risk-in-vscode-extension-marketplaces; Rami McCarthy LinkedIn https://www.linkedin.com/in/ramimac/; Patrick Debois LinkedIn https://www.linkedin.com/in/patrickdebois/; Charlie Eriksen LinkedIn https://www.linkedin.com/in/charlie-eriksen-a318578/. Chapters: 00:00 Introduction; 01:10 Malicious VS Code Extensions; 06:00 Leaked Secrets & Supply Chain Risk; 15:00 npm Security Updates & SafeChain; 19:00 The Future of AI Development | — |
| 10/16/25 | Building, Investing, and the Future of AI: Maarten Mortier on the New Era of Venture Capital | In this episode of Cyber & Sake, host Mackenzie Jackson sits down with Maarten Mortier, former CTO of Shopad, now co-founder and managing partner at Entourage VC. They discuss Maarten’s early love for programming, how Ghent became a thriving European tech hub, and why builders make the best investors. Maarten shares his insights into what he looks for during startup due diligence, how AI is reshaping both development and venture capital, and why healthy security should be baked into company culture, not siloed off. This is a deep and candid conversation about technology, product, and philosophy, from scaling startups to the evolving role of AI in coding, investing, and innovation. Pour yourself a glass of sake and join us for an episode that blends code, capital, and curiosity. Chapters: 00:00 Introductions & Sake Tasting; 01:10 From Early Coding Days to CTO Success; 04:07 Why Ghent is Becoming a European Tech Hub; 07:58 Building and Investing: The Story of Entourage VC; 11:02 Inside VC Due Diligence and the Founder Relationship; 18:03 Tech Health, Security, and Red Flags for Startups; 25:16 What Makes a Real Moat in the Age of AI; 32:03 AI, Product Building, and the Future of Venture Capital; 39:36 Final Thoughts, Security Advice & The Sake Game | — |
Showing 25 of 35
Chart Positions
1 placement across 1 market.