
Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Most discussed topics
Brands & references
Total monthly reach
Estimated from 5 chart positions in 5 markets.
By chart position
- 🇬🇧GB · Education#5130K to 100K
- 🇨🇦CA · Education#5730K to 100K
- 🇺🇸US · Education#1155K to 30K
- 🇳🇿NZ · Education#4710K to 30K
- 🇮🇪IE · Education#773K to 10K
- Per-Episode Audience
Est. listeners per new episode within ~30 days
39K to 135K🎙 ~2x weekly·48 episodes·Last published yesterday - Monthly Reach
Unique listeners across all episodes (30 days)
78K to 270K🇬🇧37%🇨🇦37%🇺🇸11%+2 more - Active Followers
Loyal subscribers who consistently listen
31K to 108K
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers
—
Total Plays
—
Total Reviews
—
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
From 1 epsHost
Recent guests
No guests detected in recent episodes.
Recent episodes
Agent Pentest Benchmarking | Episode 52
May 14, 2026
17m 32s
AI and Bug Bounties | Episode 51
May 11, 2026
13m 49s
Vercel Breach | Episode 50
May 1, 2026
17m 46s
Claude Mythos | Episode 49
Apr 24, 2026
25m 40s
Holocron OpenBrain with Alex Minster | Episode 48
Apr 22, 2026
51m 08s
Social Links & Contact
Official channels & resources
Official Website
Login
RSS Feed
Login
| Date | Episode | Topics | Guests | Brands | Places | Keywords | Sponsor | Length | |
|---|---|---|---|---|---|---|---|---|---|
| 5/14/26 | ![]() Agent Pentest Benchmarking | Episode 52 | In this episode of BHIS Presents: AI Security Ops, the team breaks down a new benchmarking framework designed to evaluate AI pentesting agents against real-world offensive security scenarios.What began as experimental evaluation of “can AI hack?” has quickly shifted into something much closer to operational reality. Organizations are now seeing a surge in agentic tooling and automated pentesting workflows, where human-guided AI systems consistently outperform fully autonomous agents in complex, unsupervised environments.As AI tooling evolves, teams must balance speed with validation, monitoring, and oversight as offensive capabilities outpace defenses.We dig into:The new “AutoPenBench” framework for benchmarking AI pentesting agentsWhy fully autonomous AI hacking only achieved a 21% success rateHow human-assisted AI workflows increased success rates to 64%Testing AI agents against Log4Shell, Heartbleed, Spring4Shell, and classic web exploitsWhy modern offensive AI systems still require heavy human oversight and validationHow custom internal AI frameworks are already finding vulnerabilities humans missedThe operational role of prompt engineering, scaffolding, and agent memoryReal examples of AI agents mis-scoping infrastructure and chasing irrelevant targetsHow AI lowers the barrier for ransomware operations and offensive capability developmentWhy defensive teams need stronger edge visibility, packet capture, and AI-aware monitoring strategies⸻📚 Key Concepts & TopicsAI Pentesting & Agentic SecurityAutonomous AI hacking agentsAgentic AI workflowsAI-assisted penetration testingOffensive security automationBenchmarking & EvaluationAutoPenBenchAI security benchmarkingHuman-in-the-loop validationLong-horizon task evaluationOffensive Security OperationsSQL injectionPath traversalLog4Shell / Heartbleed / Spring4ShellKali Linux offensive toolingAI Infrastructure & Model OperationsPrompt engineeringPersistent agent memoryRoleplay jailbreak techniquesGuardrail reduction strategiesDefensive Security StrategyDefense in depthEdge network monitoringZeek network analysisPacket capture visibilityIndustry & Threat ImplicationsAI-enabled ransomware operationsAI-assisted red teamingInfrastructure scoping failures Operational scalability challenges#AISecurity #CyberSecurity #Pentesting #AIAgents #RedTeam #EthicalHacking #CyberDefense----------------------------------------------------------------------------------------------(00:00) - Video Intro and Sponsor (01:20) - Al Pentesting Benchmark Overview (02:11) - How AutoPenBench Works (03:44) - Real World Results and Experience (05:16) - Real World Results and Experience (06:48) - Human and Al Collaboration (07:38) - Improving Al Agent Workflows (08:56) - Model Limitations and Updates (10:35) - Jailbreaks and Model Guardrails (13:16) - Provider Controls and Trust Factors (14:41) - Lower Barrier for Cyber Attacks (15:39) - Defensive Security Implications (16:59) - Why Red Teams Need Al Now Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Derek Banks - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 17m 32s | ||||||
| 5/11/26 | ![]() AI and Bug Bounties | Episode 51 | In this episode of BHIS Presents: AI Security Ops, the team breaks down a growing problem in cybersecurity: AI-generated bug bounty “slop” overwhelming the system.What started as a powerful way to crowdsource vulnerability discovery is now hitting a breaking point. Programs like cURL’s bug bounty and platforms like HackerOne are seeing a massive surge in submissions — but fewer and fewer of them are actually valid.The result? Security teams spending hours reviewing reports that go nowhere, while real vulnerabilities risk getting buried in the noise.We dig into:• Why cURL shut down its bug bounty program after years of success• How valid reports dropped from 1-in-6 to 1-in-20• What “death by a thousand slops” actually looks like in practice• How AI is flooding programs with low-quality vulnerability reports• The difference between “theoretical” vs. exploitable vulnerabilities• Why reviewing findings is now harder than generating them• How HackerOne is responding to the surge in submissions• Whether AI can be used to filter AI-generated noise• The role of reproducibility and proof-of-impact in triage• Why human expertise still matters in vulnerability validationThis episode explores a critical shift in security operations: when vulnerability discovery becomes cheap and automated, validation and triage become the real bottleneck.⸻📚 Key Concepts & TopicsBug Bounty Programs & Triage• Submission quality vs. volume imbalance• Signal-to-noise challenges in vulnerability pipelines• The growing burden of manual validationAI in Vulnerability Discovery• Automated scanning vs. real exploitability• AI-generated findings and false positives• The “editor’s dilemma” — review vs. generationAI Security Risks• Lower barrier to entry for vulnerability discovery• Over-reliance on AI without domain expertise• Flooding systems with low-quality submissionsDefensive Strategy• Requiring reproducible steps and proof-of-impact• Using AI to pre-filter vulnerability reports• Combining human expertise with AI toolingIndustry Impact• cURL bug bounty shutdown• HackerOne submission pause• Shifting economics of vulnerability research#AISecurity #BugBounty #CyberSecurity #LLMSecurity #ArtificialIntelligence #InfoSec #BHIS #AIAgents #AppSec----------------------------------------------------------------------------------------------(00:00) - Intro: Bug Bounty Burnout & AI Noise (01:14) - cURL Kills Its Bug Bounty Program (02:05) - “Death by a Thousand Slops” Explained (03:42) - AI vs Vulnerability Scanners: Signal vs Noise (04:38) - HackerOne Pauses Submissions & Industry Impact (05:41) - Can AI Filter AI? Proposed Solutions (07:49) - Why Humans Still Matter in Validation (12:55) - Final Takeaway: AI as a Tool, Not a Replacement Click here to watch this episode on YouTube. Creators & Guests Ethan Robish - Guest Bronwen Aker - Host Brian Fehrman - Host Derek Banks - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 13m 49s | ||||||
| 5/1/26 | ![]() Vercel Breach | Episode 50✨ | cybersecuritydata breach+5 | — | RobloxContext.ai+3 | — | Vercel breachAI security+6 | — | 17m 46s | |
| 4/24/26 | ![]() Claude Mythos | Episode 49 | In this episode of BHIS Presents: AI Security Ops, the team breaks down Claude Mythos Preview — Anthropic’s unreleased frontier model that may represent a turning point in AI-powered cybersecurity.What started as a controlled research release under Project Glasswing has quickly become one of the most controversial developments in AI security. Mythos isn’t just better at finding vulnerabilities — it’s operating at a scale and depth that challenges long-held assumptions about how quickly software can be broken… and whether it can realistically be fixed.From leaked internal documents to real-world exploit generation, this episode explores what happens when vulnerability discovery becomes cheap, fast, and automated — while remediation remains slow, manual, and human-bound.The result? A growing asymmetry that could fundamentally reshape the security landscape.We dig into:• What Claude Mythos Preview is and why it was withheld from the public• The leaks that exposed its existence and capabilities• How Project Glasswing is positioning AI for defensive use• Real-world vulnerability discoveries made by the model• The “vulnpocalypse” problem: discovery vs. remediation imbalance• Emerging AI behaviors that raise containment concerns• How attackers are already leveraging AI for offensive operations• The access control dilemma: who gets to use models like this?• Why patching — not discovery — is now the primary bottleneck• What defenders must do to prepare for AI-accelerated exploitationThis episode explores a critical shift in cybersecurity: when vulnerability discovery scales faster than human response, the entire defensive model starts to break down.⸻📚 Key Concepts & TopicsAI-Powered Vulnerability Discovery• Autonomous exploit generation and chaining• Benchmark performance vs. prior models• AI-assisted offensive security workflowsAI Security Risks• Discovery vs. remediation asymmetry• AI-driven vulnerability scaling• Offensive use by nation-states and cybercriminalsModel Behavior & Safety• Emergent autonomy and sandbox escape concerns• Evaluation awareness and deceptive behaviors• Limits of containment and alignmentDefensive Strategy & Readiness• Patch velocity as the new bottleneck• AI-assisted vulnerability management• Open-source ecosystem risk exposureAI Governance & Industry Response• Restricted model releases and access control• Regulatory and financial sector concerns• The future of AI capability containment#AISecurity #CyberSecurity #ArtificialIntelligence #LLMSecurity #BHIS #AIThreats #InfoSec #AIAgents #CyberDefense(00:00) - Intro & Show Overview (01:00) - Sponsors, Hosts, and Episode Setup (01:53) - What Is Claude Mythos Preview? (03:04) - The Leak, Project Glasswing, and Restricted Access (07:53) - Capabilities: Exploits, Benchmarks, and Breakthroughs (09:16) - Real-World Vulnerabilities & “Vulnpocalypse” Concerns (14:47) - Access Control, Threat Actors, and Emerging Risks (21:38) - Defensive Strategy: Patching, AI Tools, and What Comes Next (23:08) - Defensive Strategy: Patching, AI Tools, and What Comes Next Click here to watch this episode on YouTube. Creators & Guests Derek Banks - Host Bronwen Aker - Host Brian Fehrman - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 25m 40s | ||||||
| 4/22/26 | ![]() Holocron OpenBrain with Alex Minster | Episode 48 | In this episode of BHIS Presents: AI Security Ops, the team is joined by Alex Minster to demo his project: HOLOCRON OpenBrain with — a persistent, model-agnostic memory layer designed to solve one of the biggest frustrations in AI workflows.Instead of starting from scratch every time you open a new chat, Alex’s approach creates a centralized “brain” that multiple AI models can connect to, allowing context, notes, and intelligence to persist across sessions, tools, and even platforms.The result? A flexible system that captures thoughts, ingests threat intel, and generates structured outputs — all without locking you into a single AI provider.We dig into:• The “cold start” problem in AI and why it breaks real workflows• What the OpenBrain HOLOCRON is (and isn’t)• How centralized memory changes the way we interact with AI tools• The architecture: Supabase, OpenRouter, MCP, and multi-model access• Using Discord as a lightweight ingestion pipeline for persistent memory• Real-world CTI workflows: capturing intel and generating reports on demand• Managing, editing, and superseding memory over time• The tradeoffs between context richness and security exposure• Multi-model reliability differences (and why they matter)• Practical setup: what it takes to build your own systemThis episode highlights a shift in how AI is used operationally: moving from isolated chats to persistent, structured memory systems that can evolve alongside your work.⸻📚 Key Concepts & TopicsPersistent AI Memory• Solving the “cold start” problem• Centralized context across multiple models• Structured vs raw data ingestionAI Architecture & Tooling• Supabase as a backend memory store• OpenRouter for multi-model access• MCP protocol for integrationsCyber Threat Intelligence (CTI)• Capturing, tagging, and prioritizing intel• Generating automated reports and dashboards• Context-aware intelligence workflowsSecurity & Privacy• Need-to-know data design• Avoiding overexposure via full integrations (email, docs, etc.)• Auditing and removing sensitive dataOperational Workflows• Capturing ideas, notes, and research• Multi-project memory segmentation (“multiple brains”)• Using AI to accelerate—not replace—analysis🔗 HOLOCRON GitHub Guide: https://github.com/belouve/open-brain-holocron🔗 Alex Minster: https://www.linkedin.com/in/alexminster/#AISecurity #CyberSecurity #AIWorkflows #LLM #ThreatIntel #DevSecOps #BHIS #OpenSource #AIEngineering(00:00) - Intro & Guest Introduction (Alex Minster) (00:55) - What Is the OpenBrain HOLOCRON? (Cold Start Problem) (03:00) - How It Works: Centralized Memory & AI Integration (05:30) - Architecture & Free-Tier Stack (Supabase, OpenRouter, MCP) (07:54) - Demo: Capturing Thoughts via Discord (10:55) - CTI Use Case: Prioritizing & Querying Intelligence (15:03) - Managing Memory: Editing, Deleting & Superseding Data (19:04) - Running Protocols: Automated CTI Reports (Demo) (22:05) - Multi-Brain Concept & Segmentation (25:00) - Real-World Output: Reports, Dashboards & Briefings (31:31) - Multi-Model Differences (Claude vs ChatGPT) (35:55) - Improving the System with Feedback Loops (37:29) - How to Build Your Own OpenBrain (41:26) - Real-World Benefits & Workflow Improvements (45:44) - Security Considerations & Data Exposure Risks (47:20) - Where to Find the Project & Contribute (50:16) - Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Bronwen Aker - Host Alex Minster "Belouve" - Guest Ethan Robish - Guest Brian Fehrman - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 51m 08s | ||||||
| 4/13/26 | ![]() LiteLLM Supply Chain Compromise | Episode 47 | In this episode of BHIS Presents: AI Security Ops, the team breaks down the LiteLLM supply chain compromise–a real-world attack that shows how AI systems are being breached through the same old software supply chain weaknesses.What initially looked like a bad release quickly escalated into a full-scale compromise affecting a library downloaded millions of times per day. But LiteLLM wasn’t the starting point–it was just one link in a much larger attack chain involving compromised security tools, CI/CD pipelines, and stolen publishing credentials.The result? Malicious packages distributed at scale, harvesting secrets, enabling lateral movement, and establishing persistence across affected systems.We dig into:• What LiteLLM is and why it’s such a high-value target• How the attack chain started with compromised security tooling (Trivy, Checkmarx)• How unpinned dependencies enabled the compromise• The role of CI/CD pipelines in exposing sensitive credentials• What the malicious LiteLLM packages actually did (credential harvesting, persistence, lateral movement)• The scale of impact given LiteLLM’s widespread adoption• Why supply chain attacks are no longer theoretical–and no longer nation-state exclusive• How AI is lowering the barrier to entry for attackers• Why this wasn’t really an “AI vulnerability”–but an infrastructure failure• The growing risk of automated, agent-driven attack discoveryThis episode highlights a critical reality: the biggest risks in AI systems aren’t always in the models–they’re in the pipelines, dependencies, and infrastructure surrounding them.⸻📚 Key Concepts & TopicsSupply Chain Security• Dependency poisoning and malicious package distribution• CI/CD pipeline compromise• Version pinning and build integrityCredential & Secrets Exposure• API keys, SSH keys, and cloud credentials in pipelines• Risks of centralized AI gateways like LiteLLMThreat Actor Techniques• Tag rewriting and trusted reference hijacking• Multi-stage malware (harvest, lateral movement, persistence)• Use of lookalike domains for exfiltrationAI & Security Reality Check• AI as an amplifier, not the root vulnerability• Traditional security failures in modern AI stacks• Automation lowering attacker barriersDefensive Strategies• Dependency pinning and isolation (Docker, VPS)• Atomic credential rotation• Treating CI/CD tools as critical infrastructure• Monitoring outbound traffic from build environments(00:00) - Intro & Incident Overview (01:26) - What Is LiteLLM & Why It Matters (03:53) - Supply Chain Scope & Why This Is Dangerous (07:31) - Why These Attacks Are Getting Easier (AI + Scale) (10:48) - Attack Chain Breakdown (Trivy → Checkmarx → LiteLLM) (11:50) - What the Malware Did & Impact at Scale (14:23) - Detection, Response & Who Was Safe Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Derek Banks - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 19m 32s | ||||||
| 4/2/26 | ![]() Model Ablation | Episode 46 | In this episode of BHIS Presents: AI Security Ops, the team breaks down model ablation — a powerful interpretability technique that’s quickly becoming a serious concern in AI security.What started as a way to better understand how models work is now being used to remove safety mechanisms entirely. By identifying and disabling specific components inside a model, researchers — and attackers — can effectively strip out refusal behavior while leaving the rest of the model fully functional.The result? A fast, reliable way to “de-safety” AI systems without prompt engineering, fine-tuning, or significant compute.We dig into:• What model ablation is and how it works• The difference between ablation and pruning• How safety behaviors can be isolated inside model internals• Why refusal mechanisms are often localized (and fragile)• How ablation is being used as a jailbreak technique• Why this is more reliable than prompt-based attacks• Risks specific to open-weight models and public checkpoints• The growing “uncensored model” ecosystem• Why interpretability is a double-edged sword• Whether safety should be deeply embedded into model architecture• What this means for defenders and AI security strategyThis episode explores a critical shift in AI risk: when safety controls can be surgically removed, they stop being security controls at all.⸻📚 Key Concepts & TopicsModel Internals & Interpretability• Neurons, attention heads, and residual stream analysis• Activation space and feature directionsAI Security Risks• Prompt injection vs. structural attacks• Jailbreaking techniques and safety bypassesModel Access & Risk Surface• Open-weight vs. API-only models• Hugging Face and the uncensored model ecosystemAI Safety & Governance• Defense-in-depth for AI systems• Future standards for ablation resistance#AISecurity #ModelAblation #LLMSecurity #CyberSecurity #ArtificialIntelligence #AIResearch #BHIS #AIAgents #InfoSec(00:00) - Intro & Show Overview (01:27) - Removing AI Safety Mechanisms (02:05) - What Is Model Ablation? (Technical Breakdown) (04:01) - Open-Weight Models & Practical Limitations (05:43) - Risks, Use Cases, and Ethical Tradeoffs (07:32) - Security Implications & “You Can’t Ban Math” (10:43) - Future Impact: Open Models Catching Up (17:44) - Final Takeaway: Why “No” Isn’t Security Click here to watch this episode on YouTube. Creators & Guests Bronwen Aker - Host Derek Banks - Host Brian Fehrman - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 18m 17s | ||||||
| 3/26/26 | ![]() Embedding Space Attacks | Episode 45 | In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data.Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly.Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios.We dig into:- What embeddings are and how AI systems convert text into numerical representations- How vector spaces enable similarity search and retrieval in LLM applications- What embedding space attacks are and why they matter for AI security- How small perturbations in data can drastically change model behavior- The risks of poisoned data in RAG and vector databases- How attackers can influence search results and downstream AI outputs- Why these attacks are subtle, hard to detect, and often overlooked- The role of visualization in understanding embedding behavior- Real-world implications for AI-powered applications and workflows- Defensive considerations when building with embeddings and vector storesThis episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI.⸻📚 Key Concepts CoveredAI Foundations- Embeddings and vector representations- Similarity search and vector space reasoningAI Security Risks- Embedding space manipulation- Data poisoning in vector databases- Retrieval manipulation in RAG systemsApplications & Impact- LLM-powered search and assistants- AI pipelines using embeddings- Risks in production AI systems#AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSecJoin the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhis(00:00) - Intro & Episode Overview (01:39) - What Are Embeddings? (AI Only Understands Numbers) (03:44) - The Embedding Process (Text → Vectors) (07:43) - Similarity, Classification & Vector Math (09:55) - Visualizing Embedding Space (2D Projection) (14:29) - Classifiers (15:39) - Playing Games with Information (18:06) - Attack Techniques: Synonyms & Context Manipulation (20:29) - Context Padding (27:10) - Collision Attacks, Defenses & Final Thoughts Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Derek Banks - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 33m 05s | ||||||
| 3/19/26 | ![]() Indirect Prompt Injection | Episode 44 | In this episode of BHIS Presents: AI Security Ops, the team breaks down indirect prompt injection — the #1 risk in the OWASP Top 10 for LLM Applications — and why it represents one of the most dangerous and misunderstood threats in modern AI systems.Unlike traditional attacks, indirect prompt injection doesn’t require malware, credentials, or even user interaction. Instead, attackers hide malicious instructions inside everyday content like emails, documents, or web pages — and wait for AI systems to unknowingly execute them.From real-world exploits like EchoLeak to in-the-wild attacks observed by Palo Alto Unit 42, this episode explores how attackers are already abusing AI-powered tools in production environments — and why current defenses are struggling to keep up.We dig into:• What indirect prompt injection is and how it differs from direct attacks• Why OWASP ranks prompt injection as the #1 LLM security risk• How attackers hide payloads inside emails, documents, and web content• The EchoLeak zero-click exploit against Microsoft 365 Copilot• Web-based prompt injection attacks observed in the wild (Unit 42)• Exploits targeting AI coding tools like Cursor IDE and GitHub Copilot• How RAG systems amplify the risk through poisoned knowledge bases• Why LLM architecture makes this problem fundamentally hard to solve• Research showing modern defenses still fail 50%+ of the time• Practical mitigation strategies: least privilege, human-in-the-loop, and observabilityThis episode focuses on the real-world security implications of AI adoption, showing how attackers are already leveraging these techniques — and what defenders need to understand as AI becomes deeply embedded in business workflows.⸻📚 Key ReferencesPrompt Injection & LLM Risk• OWASP Top 10 for LLM Applications 2025 — https://owasp.orgReal-World Attacks• EchoLeak (CVE-2025-32711) — Aim Security / arXiv• Unit 42 — Web-Based Indirect Prompt Injection in the Wild (March 2026) — https://unit42.paloaltonetworks.comAI System Vulnerabilities• Cursor IDE (CVE-2025-59944)• GitHub Copilot (CVE-2025-53773)• Lakera — Zero-Click MCP Attack — https://lakera.aiResearch on Defenses• Zhan et al. — Adaptive Attacks Break Defenses (NAACL 2025)• Anthropic System Card (Feb 2026)• Google Gemini Security Research (2025)Standards & Guidance• NIST AI Risk Management Framework — https://nist.gov• MITRE ATLAS — https://atlas.mitre.org• ISO/IEC 42001 AI Management Systems#AISecurity #PromptInjection #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #infosec (00:00) - Intro & BHIS / Antisyphon Overview (01:19) - OWASP Top 10 & Prompt Injection Context (01:41) - Indirect Prompt Injection Explained (Stored Attack Analogy) (02:54) - Real-World Attack Scenarios (Calendar & Hidden Payloads) (05:10) - EchoLeak & Zero-Click Copilot Exploit (06:10) - Weaponized Excel Prompt Injection PoC (06:50) - Email Injection & AI Summarization Abuse (09:07) - Why Detection & Prevention Are So Difficult (14:02) - Mitigations & Final Thoughts Click here to watch this episode on YouTube. Creators & Guests Derek Banks - Host Brian Fehrman - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 16m 10s | ||||||
| 3/12/26 | ![]() Top AI Security Concerns | Episode 43 | In this episode of BHIS Presents: AI Security Ops, Bronwen Aker and Dr. Brian Fehrman break down some of the top AI security concerns being discussed by researchers, security firms, and government agencies this year.As AI capabilities rapidly expand, so does the attack surface. From agentic AI systems being used by attackers, to deepfakes at industrial scale, to the persistent challenge of prompt injection, security teams are trying to understand what risks are real, what’s hype, and where defenders should focus first.We dig into:- Why agentic AI is emerging as a major security concern- How attackers could weaponize autonomous agents to scale operations- The risk of malicious agent skills and AI supply chain attacks- Why overly broad permissions make agent-based systems dangerous- AI-assisted phishing campaigns and social engineering at scale- The rise of deepfakes and corporate fraud driven by generative AI- Why humans still struggle to reliably detect deepfake media- The economics of deepfake fraud and real-world incidents- Prompt injection attacks and why they remain difficult to solve- Whether future models may autonomously discover and exploit jailbreaksThis episode looks at the practical security implications of today’s AI ecosystem — where the biggest risks are coming from, how attackers may leverage AI systems, and what defenders should be thinking about as these technologies continue to evolve.📚 Key ReferencesAgentic AI Threats- CrowdStrike 2026 Global Threat Report — https://www.crowdstrike.com- IBM X-Force 2026 Threat Intelligence Index — https://www.ibm.com/security/x-force- Cisco State of AI Security 2026 — https://www.cisco.com/site/us/en/products/security/state-of-ai-security.html#tabs-9da71fbd27-item-1288c79d71-tabDeepfakes & AI-Driven Fraud- WEF Global Cybersecurity Outlook 2026 — https://www.weforum.org/publications/global-cybersecurity-outlook-2026/- International AI Safety Report 2026 — https://www.internationalaisafetyreport.orgAI Security & Infrastructure Risk- CISA Joint Guidance on AI in OT — https://www.cisa.gov/news-events/news/new-joint-guide-advances-secure-integration-artificial-intelligence-operational-technologyPrompt Injection & LLM Exploitation- Schneier et al., “The Promptware Kill Chain” — https://www.lawfaremedia.org/article/the-promptware-kill-chain- Palo Alto Unit 42 — “Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild”https://unit42.paloaltonetworks.com/indirect-prompt-injection-ai-agents/(00:00) - Intro & Episode Overview (02:18) - Agentic AI as a Security Threat (CrowdStrike 2026 Global Threat Report, IBM X-Force Index) (03:46) - Malicious Agent Skills & AI Supply Chain Attacks (Cisco State of AI Security) (04:58) - How Agent Skills Actually Work (07:47) - Permissions & Guardrails for AI Agents (CISA AI in OT Guidance) (09:57) - AI-Generated Phishing Campaigns (CrowdStrike / IBM Threat Reports) (13:58) - Deepfakes at Industrial Scale (WEF Global Cybersecurity Outlook) (15:38) - Corporate Fraud & Deepfake Incidents (International AI Safety Report) (17:21) - Why Humans Struggle to Detect Deepfakes (21:13) - Prompt Injection Attacks Explained (Schneier – Promptware Kill Chain) (24:35) - AI Models Jailbreaking Other Models (Palo Alto Unit 42 Research) (28:59) - Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Bronwen Aker - Host Brian Fehrman - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 29m 11s | ||||||
Want analysis for the episodes below?Free for Pro Submit a request, we'll have your selected episodes analyzed within an hour. Free, at no cost to you, for Pro users. | |||||||||
| 3/6/26 | ![]() Claude Cowork Discussion | Episode 42 | We discuss the meaning of AI life In episode 42 of "BHIS Presents: AI Security Ops." Derek Banks is joined by Bronwen Aker and Brian Fehrman to break down Anthropic’s latest agentic desktop experiment: Claude Cowork.Claude Cowork brings large language models directly onto the endpoint — giving Claude the ability to read, write, and organize files on your local machine. It’s designed to make powerful AI workflows accessible to non-technical users… but as with any tool that operates at the OS level, the security implications are significant.We explore what happens when AI moves closer to your data, your filesystem, and your browser — and what that means for defenders.We dig into:- What Claude Cowork is and how it differs from Claude Code- Agentic desktop tools vs. command-line workflows- Local file access and OS-level interaction risks- Skills, automation, and task iteration- Chrome plugins and expanded attack surface- Overly broad permissions and least-privilege concerns- SaaS disruption and shifting trust boundaries- Endpoint monitoring challenges- The speed of AI releases vs. security review cycles- Balancing innovation with responsible deploymentThis conversation looks at the real-world operational and defensive considerations of agentic AI tools running directly on user systems. If you’re evaluating AI productivity tools inside your organization — or defending environments where they’re already being adopted — this episode will help you think through the risks and tradeoffs.(00:00) - Intro & Episode Overview (02:08) - What Is Claude Cowork? (04:03) - Desktop Agents vs. Command Line Users (06:12) - Agentic Workflows & Task Automation (08:08) - Building Fast with Claude (Speed of Development) (09:29) - Browser Plugins & Expanding Capabilities (11:06) - Permission Models & “Just Give It Access to Everything” (12:40) - SaaS Disruption & Enterprise Impact (14:38) - Overly Broad File Access Risks (16:27) - Organizational Disruption & Workforce Impact (18:09) - Security Lag vs. Rapid AI Releases (19:46) - Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Derek Banks - Host Bronwen Aker - Host Brian Fehrman - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 21m 33s | ||||||
| 2/26/26 | ![]() OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41 | In this episode of BHIS Presents: AI Security Ops, we’re joined by Beau Bullock and Hayden Covington to unpack one of the most talked-about AI agent experiments in recent memory: OpenClaw and its companion platform, Moltbook.OpenClaw exploded onto the scene as an autonomous AI agent capable of operating Claude Code from the command line — executing tasks, monitoring output, and iterating with minimal human involvement. Shortly after, Moltbook emerged as a social platform designed specifically for AI agents to interact with one another.But as with most cutting-edge AI experiments, things moved fast… and broke fast.We dig into:What OpenClaw actually is and how it worksAI agents operating other AI systems (Claude Code in the loop)The concept of “skills” and extending agent capabilitiesThe one-click RCE vulnerability discovered shortly after releaseMoltbook as a social network for AI agentsAPI keys, agent-only access, and how humans bypassed itBeacons, autonomy, and what “control” really meansWhere the line is between automation and true autonomyShort-term workforce impacts vs. long-term AI riskThis conversation moves beyond hype into the practical and security implications of rapidly deployed autonomous agents. If you’re experimenting with AI agents — or defending against them — this episode will give you a grounded perspective on what’s possible today, what’s fragile, and what’s coming next.(00:00) - Intro & Guest Welcome (01:38) - AI Agents in the News (03:23) - From “Moltbot” to OpenClaw (04:13) - What Is OpenClaw? How It Works (05:13) - Claude Code + Agent-in-the-Middle Model (07:36) - Extending OpenClaw with Skills (08:42) - Release Timeline & Rapid Adoption (10:16) - One-Click RCE in OpenClaw (11:45) - Introducing Moltbook (AI Social Network) (14:03) - How Moltbook Actually Worked (17:55) - “I Am a Robot” & Agent Authentication (20:28) - Beaconing & Operational Behavior (26:44) - Automation vs. True Autonomy (27:26) - Control, Kill Switches & Agent Boundaries (30:59) - Workforce Impact & Near-Term Concerns (35:34) - AI Apocalypse? Final Thoughts & Wrap-Up Click here to watch this episode on YouTube. Creators & Guests Beau Bullock - Guest Hayden Covington - Guest Derek Banks - Host Brian Fehrman - Host Bronwen Aker - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 36m 00s | ||||||
| 2/20/26 | ![]() AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40 | AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations.From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today.We break down:- How AI helps reduce alert fatigue and improve triage- Practical automation inside a real-world SOC- The difference between traditional ML approaches and LLM-powered workflows- Foundational techniques like K-means, anomaly detection, and behavioral baselining- Using LLMs for enrichment, investigation, and report drafting- Where AI struggles: hallucinations, inconsistency, and edge cases- Risks around over-trusting AI in security operations- How to responsibly integrate AI into analyst workflowsThis episode is grounded in real operational experience—not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security.(00:00) - Intro & Guest Introductions (04:44) - Alert Triage & SOC Pain Points (06:04) - Automation Inside the SOC (09:59) - “Boring AI”: Clustering, Baselining & Statistics (17:06) - AI-Assisted Reporting & Client Communication (18:34) - Limitations, Edge Cases & Model Risk (22:56) - Hallucinations & Inconsistent Outputs (25:04) - AI Demos vs. Real-World Security Work (28:35) - Final Thoughts & Closing Click here to watch this episode on YouTube. Creators & Guests Hayden Covington - Guest Ethan Robish - Guest Bronwen Aker - Host Derek Banks - Host Brian Fehrman - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com | 29m 28s | ||||||
| 2/12/26 | ![]() AI News | Episode 39 | AI News | Episode 39In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure.We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now.Chapters:00:00 – Introduction and SponsorsBlack Hills Information Security - https://www.blackhillsinfosec.com/Antisyphon Training - https://www.antisyphontraining.com/01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)Discussion begins as the hosts introduce the first story.How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.👉 https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)Conversation shifts to agent privilege management and governance failures.👉 https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html10:07 – NIST Focus on Securing AI Agents in Critical InfrastructureDiscussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.👉 https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c13:44 – Tenable One AI ExposureBreaking down Tenable’s push into enterprise AI usage visibility and exposure management.👉 https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scaleJoin the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhisChapters(00:00) - Introduction and Sponsors (01:08) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto) (10:07) - NIST Focus on Securing AI Agents in Critical Infrastructure (13:44) - Tenable One AI Exposure Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Click here to watch this episode on YouTube. ----------------------------------------------------------------------------------------------About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 18m 08s | ||||||
| 2/5/26 | ![]() Questions From the Community | Episode 38 | Click here to watch this episode on YouTube. Creators & Guests Brian Fehrman - Host Joff Thyer - Host Derek Banks - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com Click here to view the episode transcript. | 16m 35s | ||||||
| 1/30/26 | ![]() A.I. Frameworks and Databases | Episode 37 | In Episode 37 of AI Security Ops, the team breaks down the most important AI security frameworks and vulnerability databases used to track risks in machine learning and large language models. The discussion covers emerging AI vulnerability databases, the OWASP Top 10 for LLMs, CVE challenges, and frameworks like MITRE ATLAS, highlighting why standardizing AI threats is still difficult. This episode is a practical guide for security professionals looking to stay ahead of AI vulnerabilities, attack techniques, and defensive resources in a fast-moving landscape.Chapters(00:00) - Episode 37 – AI Frameworks & Databases (01:39) - A.I. vulnerability tracking is still young (02:44) - Should A.I. get its own vulnerability database? (07:33) - The benefit of multiple vulnerability databases (15:58) - The what is the definition of a vulnerability? (17:54) - Final Thoughts Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com | 18m 50s | ||||||
| 1/22/26 | ![]() AI News Stories | Episode 36 | This week on AI Security Ops, the team breaks down how attackers are weaponizing AI and the tools around it: a critical n8n zero-day that can lead to unauthenticated remote code execution, prompt-injection “zombie agent” risks tied to ChatGPT memory, a zero-click-style indirect prompt injection scenario via email/URLs, and malicious Chrome extensions caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.Key stories discussed1) n8n (“n-eight-n”) zero-day → unauthenticated RCE riskhttps://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.htmlThe hosts discuss a critical flaw in the n8n workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve remote code execution as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer. ai-news-stories-episode-36Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system. ai-news-stories-episode-362) “Zombie agent” prompt injection via ChatGPT Memoryhttps://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injectionThe team talks about research describing an exploit that stores malicious instructions in long-term memory, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem. ai-news-stories-episode-36User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved. ai-news-stories-episode-363) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/Another story describes a crafted URL delivered via email that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as indirect prompt injection—a pattern they expect to keep seeing as assistants gain more connectivity. ai-news-stories-episode-36Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history. ai-news-stories-episode-364) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users)https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.htmlTwo Chrome extensions posing as AI productivity tools reportedly injected JavaScript into AI web UIs, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing extension supply-chain risk and the reality that “approved store” doesn’t mean safe. ai-news-stories-episode-36Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism. ai-news-stories-episode-365) APT28 credential phishing updated with AI-written lureshttps://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.htmlThe closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is AI-generated, making it more consistent/convincing (and harder for users to spot via grammar/tone). ai-news-stories-episode-36The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide). ai-news-stories-episode-36Chapter Timestamps(00:00) - Intro & Sponsors (01:16) - 1) n8n zero-day → unauthenticated RCE (09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory (19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection) (23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users) (29:59) - 5) APT28 phishing refreshed with AI-written lures (34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders Click here to watch a video of this episode. Creators & Guests Brian Fehrman - Host Bronwen Aker - Host Derek Banks - Host Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summitshttps://poweredbybhis.com | 35m 16s | ||||||
| 1/8/26 | ![]() 2026 Predictions | Episode 35 | AI Security Ops | Episode 35 – 2026 PredictionsIn this episode, the BHIS panel looks into the crystal ball and shares bold predictions for AI in 2026—from energy constraints and drug development breakthroughs to agentic AI risks and cybersecurity threats.Chapters(00:00) - Intro & Sponsor Shoutouts (01:14) - Prediction: Grid Power Becomes the Bottleneck (10:27) - Prediction: FDA Qualifies AI Drug Development Tools (15:45) - Prediction: Nation-State Threat Actors Weaponize AI (17:33) - Prediction: Agentic AI Dominates App Development (23:07) - Closing Thoughts: Jobs, Risk & Opportunity 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comBrought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ | 24m 50s | ||||||
| 12/24/25 | ![]() AI Security Ops - Why Did We Create This Podcast? | Podcast Trailer | Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhisAI Security Ops | Episode 34 – Why Did We Create This Podcast?In this episode, the BHIS team explains the purpose behind AI Security Ops, what you can expect from future episodes, and why this show matters for anyone at the intersection of AI and cybersecurity.Chapters(00:00) - Intro & Welcome (00:13) - Why We Started AI Security Ops (00:41) - Our Mission: Stay Informed & Ahead (00:56) - What We Cover: AI News & Insights (01:23) - Community Q&A & Real-World Scenarios (02:18) - Special Guests & Industry Leaders (02:41) - Demos, How-Tos & Practical Tips (03:07) - Who Should Listen & Why Subscribe (03:34) - Join the Conversation & Closing 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comBrought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com | 3m 53s | ||||||
| 12/18/25 | ![]() Community Q&A on AI Security | Episode 34 | Community Q&A on AI Security | Episode 34In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity.We break down:Why LLMs sometimes “make stuff up” and how to reduce hallucinationsThe role of prompts, temperature, and RAG databases in accuracyPrompting best practices and reasoning modes for better resultsLegal liability: Can you sue ChatGPT for bad advice?Memory features, data retention, and privacy trade-offsSecurity paranoia: AI apps, trust, and enterprise vs free accountsPractical examples like customizing AI for writing styleHow to explain AI to your mom (or any non-technical audience)Why AI isn’t magic—just math and advanced auto-completeWhether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly.Chapters(00:00) - Welcome & Sponsor Shoutouts (00:50) - Episode Overview: Community Q&A (01:19) - Q1: Will ChatGPT Make Stuff Up? (07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases? (11:15) - Q3: How Can AI Improve Without Ingesting Everything? (22:04) - Q4: How Do You Explain AI to Non-Technical People? (28:00) - Closing Remarks & Training Plug Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ | 28m 28s | ||||||
| 12/11/25 | ![]() AI News Stories | Episode 33 | 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAI News | Episode 33In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.We break down:AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.Amazon’s private AI bug bounty: Nova models under the microscope.Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.⏱️ Chapters(00:00) - Intro & Sponsor Shoutouts (01:27) - AI-Orchestrated Cyber Espionage (Anthropic) (08:10) - ShadowMQ: Critical RCE in AI Inference Engines (09:54) - KawaiiGPT: Free Black-Hat LLM (22:45) - Amazon Nova: Private AI Bug Bounty (26:38) - Google Antigravity IDE Hacked in 24 Hours (31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism 🔗 LinksAI-Orchestrated Cyber Espionage (Anthropic)ShadowMQ: Critical RCE in AI Inference EnginesKawaiiGPT: Free Black-Hat LLMAmazon Nova: Private AI Bug BountyGoogle Antigravity IDE Hacked in 24 HoursPROMPTFLUX: Malware Using Gemini for Polymorphism#AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malwareBrought to you by Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ | 37m 13s | ||||||
| 12/4/25 | ![]() Model Evasion Attacks | Episode 32 | 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comModel Evasion Attacks | Episode 32In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.We break down:- What model evasion attacks are and how they differ from data poisoning- How attackers tweak features to bypass classifiers (images, phishing, malware)- Real-world tactics like model extraction and trial-and-error evasion- Why non-determinism in AI models makes evasion harder to predict- Advanced threats: model theft, ablation, and adversarial AI- Defensive strategies: adversarial training, API throttling, and realistic expectations- Future outlook: regulatory trends, transparency, and the ongoing arms raceWhether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.#AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #aithreatsBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ (00:00) - Intro & Sponsor Shoutouts (01:19) - What Are Model Evasion Attacks? (03:58) - Image Classifiers & Pixel Tweaks (07:01) - Malware Classification & Decision Boundaries (10:02) - Model Theft & Extraction Attacks (13:16) - Non-Determinism & Myth Busting (16:07) - AI in Offensive Capabilities (17:36) - Defensive Strategies & Adversarial Training (20:54) - Vendor Questions & Transparency (23:22) - Future Outlook & Regulatory Trends (25:54) - Panel Takeaways & Closing Thoughts | 28m 32s | ||||||
| 11/27/25 | ![]() Data Poisoning | Episode 31 | 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comData Poisoning Attacks | Episode 31In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.We break down:What data poisoning is and why it mattersHow attackers inject malicious samples or flip labels in training setsThe role of open-source repositories like Hugging Face in supply chain riskNew twists for LLMs: poisoning via reinforcement feedback and RAGReal-world concerns like bias in ChatGPT and malicious model uploadsDefensive strategies: governance, provenance, versioning, and security assessmentsWhether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. Treat your AI like a “drunk intern”: verify everything.#aisecurity #DataPoisoning #Cybersecurity #BHIS #llmsecurity #aithreatsBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ (00:00) - Intro & Sponsor Shoutouts (01:19) - What Is Data Poisoning? (03:58) - Poisoning Classifier Models (08:10) - Risks in Open-Source Data Sets (12:30) - LLM-Specific Poisoning Vectors (17:04) - RAG and Context Injection (21:25) - Realistic Threats & Examples (25:48) - Defensive Strategies & Governance (28:27) - Panel Takeaways & Closing Thoughts | 31m 20s | ||||||
| 11/20/25 | ![]() AI News Stories | Episode 30 | 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAI News Stories | Episode 30In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.Topics Covered:Only 5% of Americans are unaware of AI?What Pew Research reveals about AI’s penetration into everyday life and workplace usage.AI’s Shift to the Intimacy Economy – Project Libertyhttps://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1 Amazon to Cut Jobs and Invest in AI Infrastructure14,000 corporate roles eliminated—are layoffs really about efficiency or something else?Amazon to Cut Jobs & Invest in AI – DWhttps://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365Local Models Less Secure than Cloud Providers?Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.Local LLMs Security Paradox – Quesmahttps://quesma.com/blog/local-llms-security-paradox Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.Brought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ (00:00) - Intro & Sponsor Shoutouts (01:07) - AI’s Shift to the Intimacy Economy (Pew Research) (19:40) - Amazon Layoffs & AI Investment (27:00) - Local LLM Security Paradox (36:32) - Wrap-Up & Key Takeaways | 37m 05s | ||||||
| 11/13/25 | ![]() A Conversation with Dr. Colin Shea-Blymyer | Episode 29 | 🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comA Conversation with Dr. Colin Shea-Blymyer | Episode 29In this episode of BHIS Presents: AI Security Ops, the panel welcomes Dr. Colin Shea-Blymyer for a deep dive into the intersection of AI governance, cybersecurity, and red teaming. From the historical roots of neural networks to today’s regulatory patchwork, we explore how policy, security, and innovation collide in the age of AI. Expect candid insights on emerging risks, open models, and why defining your risk appetite matters more than ever.Topics Covered:AI governance vs. innovation: U.S. vs. EU regulatory approachesThe evolution of neural networks and lessons from AI historyAI red teaming: definitions, methodologies, and data-sharing challengesSafety vs. security: where they overlap and divergeEmerging risks: supply chain vulnerabilities, prompt injection, and poisoned dataOpen weights vs. closed models: implications for research and securityPractical takeaways for organizations navigating AI uncertaintyAbout the Panel:Joff Thyer, Dr. Brian Fehrman, Derek BanksGuest Panelist: Dr. Colin Shea-Blymyerhttps://cset.georgetown.edu/staff/colin-shea-blymyer/#aisecurity #aigovernance #cyberrisk #AIredteam #OpenModels #aipolicy #BHIS #AIthreats #aiincybersecurity #llmsecurityBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/ (00:00) - Intro & Guest Welcome (02:14) - Colin’s Journey: From CS to AI Governance (06:33) - Lessons from AI History & Neural Network Origins (10:28) - AI Red Teaming: Definitions & Methodologies (15:11) - Safety vs. Security: Where They Intersect (22:47) - Regulatory Landscape: U.S. Patchwork vs. EU AI Act (33:42) - Open Models Debate: Risks & Research Benefits (38:19) - Emerging Threats & Supply Chain Risks (44:06) - Practical Takeaways & Closing Thoughts | 46m 47s | ||||||
Showing 25 of 53
Sponsor Intelligence
Sign in to see which brands sponsor this podcast, their ad offers, and promo codes.
Chart Positions
5 placements across 5 markets.
Chart Positions
5 placements across 5 markets.

























