
Along The Edge Podcast: Breaking, Defending, and Understanding Agentic AI
by Andrius Useckas
Is this your podcast?Insights from recent episode analysis
Audience Interest
Podcast Focus
Publishing Consistency
Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Total monthly reach
Estimated from 1 chart position in 1 market.
By chart position
- 🇯🇵JP · Technology#1851K to 10K
- Per-Episode Audience
Est. listeners per new episode within ~30 days
500 to 5K🎙 Weekly cadence·5 episodes·Last published 3d ago - Monthly Reach
Unique listeners across all episodes (30 days)
1K to 10K🇯🇵100% - Active Followers
Loyal subscribers who consistently listen
400 to 4K
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers
—
Total Plays
—
Total Reviews
—
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
Along The Edge e6: The 90% problem in AI security with Yair Finzi from Kanopy Security
May 12, 2026
53m 11s
Along The Edge e5 - Vibe Coding Is Replacing Your Favorite SaaS
Mar 27, 2026
10m 47s
Along The Edge e4: OpenClaw Enterprise Security, AI Robotics Vulnerabilities & The Prompt Injection Epidemic
Feb 24, 2026
48m 15s
Along The Edge e3: Breaking AI Agents: From Jailbreaks to MCP Exploits with Javi Rivera
Feb 13, 2026
56m 01s
Along The Edge e2: OpenClaw Is Incredible... and Completely Unhinged
Jan 30, 2026
45m 07s
Social Links & Contact
Official channels & resources
Official Website
Login
RSS Feed
Login
| Date | Episode | Description | Length | |
|---|---|---|---|---|
| 5/12/26 | ![]() Along The Edge e6: The 90% problem in AI security with Yair Finzi from Kanopy Security | 90% of your AI security problem isn't the AI.Yair Finzi, CEO of Kanopy Security, joins Along The Edge to expose the attack surface no one's naming: thousands of agents and apps your business users are quietly shipping every week. HR portals leaking salaries. Unscoped connectors in production. 130,000 resources at a single customer.Then the contrarian close: why better AI code makes the security problem worse, not better.Shadow engineering is already running inside your enterprise. You just haven't looked for it yet. | 53m 11s | |
| 3/27/26 | ![]() Along The Edge e5 - Vibe Coding Is Replacing Your Favorite SaaS | What happens when a developer can rebuild your $500/month software in a day?In this episode, Andrius breaks down the growing threat vibe coding poses to the SaaS industry — and why some software is more vulnerable than you think. He's joined by ZioSec front-end developer Nolan Braman, who did exactly that — ripping out a knowledge base platform charging $500/month and replacing it with a vibe coded solution in about a day.But not all SaaS is equally at risk. Andrius and Nolan dig into what gives certain platforms a deeper moat — things like heavy infrastructure, complex integrations, and operational overhead that make them far harder to replicate with a weekend project. Think Intercom vs. a simple dashboard tool. One is a vibe coding target. The other? Not so much.If you're building SaaS, buying SaaS, or thinking about vibe coding your way out of a subscription — this one's for you. | 10m 47s | |
| 2/24/26 | ![]() Along The Edge e4: OpenClaw Enterprise Security, AI Robotics Vulnerabilities & The Prompt Injection Epidemic | In this episode, host Andrius Useckas is joined by Aaron Walls and Alex Gatz to break down the explosive growth of Open Claw in enterprise environments — and the security nightmares that come with it. Plus, a special conversation with Isaac Qureshi, Co-Founder & CEO of Gatlin Robotics, on what happens when AI agents meet the physical world.Topics covered:🔒 Enterprise Open Claw Adoption — With 22% of enterprises already running Open Claw (often without IT's knowledge) and 40,000+ exposed instances, the team digs into why banning it doesn't work and what CISOs should actually do about it.🛡️ Iron Claw & Secure Alternatives — Aaron shares his hands-on experience with Iron Claw's web assembly sandboxing approach. The verdict? More secure by design, but so restrictive it loses what makes Open Claw useful in the first place.💉 Prompt Injection Epidemic — HackerOne reports a 540% increase in prompt injection attacks in 2025, with only 26% getting mitigated. The group debates whether model providers even have incentive to fix this — and whether regulation will force their hand.⚖️ Regulation vs. Innovation — From the EU AI Act to Colorado's failed legislation and NIST's open calls for comment, the team discusses why compliance frameworks (PCI, HIPAA) haven't caught up and whether early regulation kills innovation.🤖 Robotics + AI Agents (feat. Isaac Qureshi) — Isaac walks through Gatlin Robotics' approach to building cleaning robots with human-in-the-loop AI, the real risks of prompt injection via physical inputs (like writing on a whiteboard), and why maintaining a "knowledge gap" between human and AI is critical.🧑💻 AI Agents Hiring Humans — The dystopian-sounding but very real marketplace where Open Claw agents can task humans to complete physical-world actions. TaskRabbit, but your boss is an AI.🔮 Where Robotics + Agents Are Headed — From Pico Claw on Raspberry Pi to humanoid fleet systems, the conversation closes on how fast this space is moving and why security can't afford to be an afterthought.🎙️ Along The Edge — AI security topics that matter, from the people working on the front lines. | 48m 15s | |
| 2/13/26 | ![]() Along The Edge e3: Breaking AI Agents: From Jailbreaks to MCP Exploits with Javi Rivera | Along the Edge — Episode 3How do you break an AI agent? Javi Rivera — AI security researcher at ZioSec with 8+ years of offensive security experience from MITRE to ThreatX — breaks down the real-world techniques attackers use against agentic AI systems.In this episode, we cover:• Jailbreaks vs. prompt injections — what's the actual difference and why it matters• Why classic attacks still work — SQL injection, command injection, and XSS through AI agents as a "middleman"• System prompt extraction — how attackers use leaked instructions to craft targeted exploits• MCP server security — why public MCP catalogs are the new supply chain risk and why there's no good solution yet• Validating real findings vs. hallucinations — the hardest problem in AI pentesting• Live demo — Gray Swan arena walkthrough showing indirect prompt injection in action• Defense strategies — least privilege, sandboxing, guardrails, and why defense in depth still applies• The coming threat — nation-state AI agents, automated offensive tooling, and why the next wave of attacks will be unprecedentedWhether you're a red teamer, AI developer, or security leader deploying agentic AI — this is the technical deep dive you need. Resources mentioned: Gray Swan AI Arena, HackerPrompt, NVIDIA NeMo Guardrails, Docker MCP Hub | 56m 01s | |
| 1/30/26 | ![]() Along The Edge e2: OpenClaw Is Incredible... and Completely Unhinged | OpenClaw (formerly Clawdbot / Moltbot / whatever it’s called today) is the first agent that feels like “Siri, but real” — and it’s moving so fast it’s breaking everyone’s threat models in real time.In this episode of Along The Edge, we unpack why OpenClaw is blowing up, what it can do when you hook it into your email, calendar, code, and tools… and why the security tradeoff is brutal: the more capable it is, the more dangerous it becomes.We cover:Why “credentials in cleartext” is just the beginningHow Discord / chat integrations can leak gateway + session detailsTool invocation endpoints and bypass pathsMCP prompt injection turning “normal workflow” into command executionWhat attackers will fingerprint and scan for in the wildWhat CISOs should do on day 1The big question: can defense keep up, or do we go “offense-driven defense”?Buckle up. | 45m 07s | |
| 1/13/26 | ![]() Along The Edge e1: Agentic AI Security, Jailbreaks, and Why You Shouldn’t Trust Your Agents | Welcome to Along The Edge, a podcast about AI security and agentic AI.In Episode 1, Andrius Useckas (Co-founder & CTO, ZioSec) sits down with Alex Gatz (Staff Security Architect, ZioSec) to break down the emerging world of agentic AI security: jailbreaks, prompt injection, SDR and SOC agents, data leaks, least privilege, and why “don’t worry, the model will filter it” is a dangerous assumption.They also walk through V-HACK, an intentionally vulnerable agentic lab project that lets security researchers and pentesters safely experiment with agent exploits, tool calling, jailbreaks, and attack paths—helping define what “pen tester 2.0” looks like.Chapters / In this episode:00:00 – Intro: who we are & why a new AI security podcast02:00 – What is agentic AI vs a plain LLM?03:10 – SDR agents, SOC workflows & new “Layer 8 / Layer 9” problems09:00 – Prompt injection 101: direct vs indirect attacks & context windows12:00 – Chatbots vs agents and why agent risk is higher15:00 – Foundation model trust & the Anthropic horror-story jailbreak demo19:30 – Why jailbreaks are (currently) an unsolved problem22:30 – Social engineering parallels & detecting AI / agentic attacks27:00 – V-HACK: intentionally vulnerable agent lab for pentesters32:00 – Securing agents: WAFs, runtime protection, identity & MCP proxies36:00 – Scanners, evals vs real pentesting & terrifying token bills39:00 – Least privilege, DLP & identity for SDR and payroll-style agents44:00 – “Don’t trust, verify”: threat modeling & testing agents early46:00 – Future of AI security: consolidation, CNAPs & SOC-as-an-agent49:00 – Magic wand: fixing context & memory in agents50:30 – Closing thoughts & what’s nextLinks mentioned:ZioSec – www.ziosec.comV-HACK (GitHub) – https://github.com/ZioSec/VHACKAbout the guests:Andrius Useckas has 25+ years in security and now focuses on agentic AI security, offensive testing, and red teaming for enterprise AI deployments.Alex Gatz is a Staff Security Architect at ZioSec. He has a background in emergency medicine and construction, then transitioned into AI in 2014 working on NLP, deep learning, anomaly detection, and now AI security.If you’re building or testing agents in 2026, this episode gives you a practical look at how real attack paths work, what breaks in production, and how to defend before attackers get there first. | 51m 10s |
Showing 6 of 6
Sponsor Intelligence
Sign in to see which brands sponsor this podcast, their ad offers, and promo codes.
Chart Positions
1 placement across 1 market.
Chart Positions
1 placement across 1 market.


