
Cyberside Chats: Cybersecurity Insights from the Experts
by Chatcyberside
Insights from recent episode analysis:
- Audience Interest
- Podcast Focus
- Publishing Consistency
- Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience (est. listeners per new episode within ~30 days): 10,001–25,000
- Monthly Reach (unique listeners across all episodes, 30 days): 25,001–75,000
- Active Followers (loyal subscribers who consistently listen): 15,001–40,000
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
Total Followers: —
Total Plays: —
Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Hosts (from 11 eps)
Recent episodes
9 Seconds to Zero: Misbehaving AI
May 5, 2026
16m 37s
Security Debt: The Risk Nobody is Reporting
Apr 28, 2026
29m 09s
Claude Code Leak: What Security Leaders Need to Know About AI Coding Agents
Apr 21, 2026
16m 17s
The “Hacking Ray” Is Here: AI, Project Glasswing, and the End of Hidden Vulnerabilities
Apr 14, 2026
24m 15s
We don’t break in, we badge in
Apr 7, 2026
28m 40s
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Topics | Guests | Brands | Places | Keywords | Sponsor | Length | |
|---|---|---|---|---|---|---|---|---|---|
| 5/5/26 | 9 Seconds to Zero: Misbehaving AI | It took nine seconds for an AI coding agent to wipe the entire production database of PocketOS — a SaaS company serving hundreds of car rental operators across the US — along with every backup. Customers showed up Saturday morning to pick up their cars and there were no reservations on file. In this episode, Sherri Davidoff and Matt Durrin dig into the cascading security failures behind the PocketOS incident, connect it to a pattern of similar AI-caused outages at Replit and Amazon AWS, and explain why the real problem isn't rogue AI — it's identity. Every one of these incidents involved an AI agent acting under an identity it shouldn't have had, or that was far too powerful. The insider risk playbook applies. We just haven't been applying it to AI. Key Takeaways 1. Treat AI agents like privileged insiders, not trusted tools. Apply your full insider risk playbook: least privilege, separation of duties, peer review, monitoring for anomalous behavior. If a human developer needs approval to push to production, so does your AI agent. The PocketOS and Kiro incidents both trace back to AI agents that were granted more trust than any new employee would get on day one. 2. Scope every credential your AI tools can reach. AI agents will find and use any token they can read — even ones created for unrelated tasks, stored in unrelated files. Audit what credentials live in your codebases and repositories. A token created for domain management should not be able to delete databases. If you wouldn't hand that token to a contractor with no supervision, don't let your AI agent have it either. 3. Enforce controls at the infrastructure layer, not the prompt layer. System prompts are advisory. The PocketOS agent had explicit rules against destructive actions — it knew them, quoted them, and violated them anyway. Confirmation requirements for destructive operations, token scoping, and peer review must live in your API layer and infrastructure, not in a paragraph of text the model is asked to obey. 4. Make sure your backups can survive a compromised identity. If your backups are accessible with the same credentials as your production systems — or stored in the same location — they are not real backups. They are a copy in the same blast radius. Test it: could an AI agent, or an attacker, with production access also wipe your recovery options? In the PocketOS incident, the answer was yes. 5. You cannot fully audit your AI vendor's safety claims. You can't penetration-test a reward signal. You can't verify that fine-tuning data isn't quietly drifting your model's behavior. The only controls you can actually rely on are the ones you own: token scoping, access controls, peer review, and monitoring. The goblin story is a reminder that even the vendor that built the model didn't see it coming. Build your defenses accordingly. Resources 1. PocketOS incident write-up by founder Jer Crane — https://x.com/lifeof_jer/status/2048103471019434248 2. Amazon Kiro / AWS outage reporting — https://kingy.ai/news/amazon-ai-aws-outage-kiro/ 3. Replit AI agent database deletion (Fortune) — https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/ 4. OpenAI "Where the goblins came from" post-mortem — https://openai.com/blog/where-the-goblins-came-from 5. Guardian reporting on Amazon cloud outages and AI tools — https://www.theguardian.com/technology/2026/feb/20/amazon-cloud-outages-ai-tools-amazon-web-services-aws | 16m 37s | ||||||
| 4/28/26 | Security Debt: The Risk Nobody is Reporting | security debt, cybersecurity incidents +3 | — | Stryker, Change Healthcare +2 | — | security debt, cybersecurity +5 | — | 29m 09s | |
| 4/21/26 | Claude Code Leak: What Security Leaders Need to Know About AI Coding Agents | AI coding agents, security risks +4 | Matt Durrin | Claude Code CLI, Claw Code +1 | — | Claude Code leak, AI coding tools +3 | — | 16m 17s | |
| 4/14/26 | The “Hacking Ray” Is Here: AI, Project Glasswing, and the End of Hidden Vulnerabilities | AI in cybersecurity, software vulnerabilities +3 | Tom Pohl | Project Glasswing, Mythos model +1 | — | cybersecurity, AI +5 | — | 24m 15s | |
| 4/7/26 | We don’t break in, we badge in | social engineering, physical security +3 | Tom, Derek | — | — | penetration testing, social engineering +3 | — | 28m 40s | |
| 3/31/26 | Stryker Attack Analysis: Cybersecurity and insurance perspectives | cybersecurity, cyber insurance +3 | Bridget Quinn Choi | Microsoft Entra, Intune +1 | — | Stryker cyberattack, cyber insurance +3 | — | 35m 15s | |
| 3/24/26 | Mass Exploitation 2.0: Web Platforms Under Attack | mass exploitation, cybersecurity +4 | — | React2Shell, LexisNexis | — | mass exploitation, cybersecurity +7 | — | 23m 28s | |
| 3/17/26 | Is Anthropic a Pentagon “Supply Chain Risk”? | AI ethics, national security +3 | Matt Durrin | Anthropic, Pentagon +1 | — | Anthropic, Pentagon +5 | — | 13m 08s | |
| 3/3/26 | Google Gemini Changed the Rules: Are Your API Keys Exposed? | API security, cloud governance +3 | — | Gemini, Google +1 | — | API keys, Google Gemini +3 | — | 12m 06s | |
| 2/24/26 | Opus 4.6: Changing the Pace of Software Exploitation | software exploitation, zero-day vulnerabilities +3 | — | Microsoft | — | zero-day, vulnerabilities +3 | — | 25m 26s | |
| 2/17/26 | Nancy Guthrie’s Recovered Footage: The Reality of Residual Data | residual data, data retention +4 | — | FBI, Google Nest | — | deleted data, data access +3 | — | 15m 19s | |
| 2/10/26 | Ransomware Gangs Are Teaming Up | ransomware, cybersecurity +4 | — | ShinyHunters | — | ransomware gangs, data leak +3 | — | 15m 44s | |
| 2/3/26 | Top Threat of 2026: The AI Visibility and Control Gap | AI is no longer a standalone tool—it is embedded directly into productivity platforms, collaboration systems, analytics workflows, and customer-facing applications. In this special Cyberside Chats episode, Sherri Davidoff and Matt Durrin break down why lack of visibility and control over AI has emerged as the first and most pressing top threat of 2026. Using real-world examples like the EchoLeak zero-click vulnerability in Microsoft 365 Copilot, the discussion highlights how AI can inherit broad, legitimate access to enterprise data while operating outside traditional security controls. These risks often generate no alerts, no indicators of compromise, and no obvious “incident” until sensitive data has already been exposed or misused. Listeners will walk away with a practical framework for understanding where AI risk hides inside modern environments—and concrete steps security and IT teams can take to centralize AI usage, regain visibility, govern access, and apply long-standing security principles to this rapidly evolving attack surface. Key Takeaways 1. Centralize AI usage across the organization. Require a clear, centralized process for approving AI tools and enabling new AI features, including those embedded in existing SaaS platforms. 2. Gain visibility into AI access and data flows. Inventory which AI tools, agents, and features are in use, which users interact with them, and what data sources they can access or influence. 3. Restrict and govern AI usage based on data sensitivity. Align AI permissions with data classification, restrict use for regulated or highly sensitive data sets, and integrate AI considerations into vendor risk management. 4. Apply the principle of least privilege to AI systems. Treat AI like any other privileged entity by limiting access to only what is necessary and reducing blast radius if credentials or models are misused. 5. Evaluate technical controls designed for AI security. Consider emerging solutions such as AI gateways that provide enforcement, logging, and observability for prompts, responses, and model access. Resources 1. Microsoft Digital Defense Report 2025 https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2025 2. NIST AI Risk Management Framework https://www.nist.gov/itl/ai-risk-management-framework 3. Microsoft 365 Copilot Zero-Click AI Vulnerability (EchoLeak) https://www.infosecurity-magazine.com/news/microsoft-365-copilot-zeroclick-ai/ 4. Adapting to AI Risks: Essential Cybersecurity Program Updates https://www.LMGsecurity.com/resources/adapting-to-ai-risks-essential-cybersecurity-program-updates/ 5. Microsoft on Agentic AI and Embedded Automation (2026) https://news.microsoft.com/source/2026/01/08/microsoft-propels-retail-forward-with-agentic-ai-capabilities-that-power-intelligent-automation-for-every-retail-function/ | 18m 58s | ||||||
| 1/27/26 | The Verizon Outage and the Cost of Concentration | The recent Verizon outage underscores a growing risk in today’s technology landscape: when critical services are concentrated among a small number of providers, failures don’t stay isolated. In this live discussion, we’ll connect the Verizon outage to past telecom and cloud disruptions to examine how infrastructure dependency creates cascading business impact. We’ll also explore how large-scale outages intersect with security threats targeting telecommunications, where availability, confidentiality, and integrity failures increasingly overlap. The session will close with actionable takeaways for strengthening resilience and risk planning across cybersecurity and IT programs. Key Takeaways 1. Diversify your technology infrastructure. Relying on a single carrier, cloud provider, or bundled service creates a single point of failure. Purposeful diversification across providers can reduce the impact of large-scale outages and improve overall resilience. 2. Treat outages as security incidents, not just reliability problems. Large-scale telecom and cloud outages directly disrupt authentication, monitoring, and incident response, and should trigger security workflows—not just IT troubleshooting. 3. Identify and document your dependencies on carriers and cloud providers. Many security controls rely on SMS, voice, cloud identity, or single regions; understanding these dependencies ahead of time prevents dangerous blind spots during outages. 4. Plan and test incident response without phones, SMS, or primary cloud access. Assume your normal communication and authentication methods will fail and ensure your teams know how to coordinate securely when core services are unavailable. 5. Expect outages to increase fraud and social engineering activity. Attackers exploit confusion and urgency during service disruptions, so security teams should prepare staff for impersonation and “service restoration” scams during major outages. 6. Use widespread outages as learning opportunities. Review what happened, assess how your organization was—or could have been—impacted, identify potential areas for improvement, and update incident response, communications, and resilience plans accordingly. Resources 1. Verizon official network outage update https://www.verizon.com/about/news/update-network-outage 2. Forrester: Verizon outage reignites reliability concerns https://www.forrester.com/blogs/verizon-outage-reignites-reliability-concerns/ 3. CNN: Verizon outage disrupted phone and internet service nationwide https://www.cnn.com/2026/01/15/tech/verizon-outage-phone-internet-service 4. AP News: Verizon outage disrupted calling and data services nationwide https://apnews.com/article/85d658a4fb6a6175cae8981d91a809c9 5. CNN: AT&T outage shows how dependent daily life has become on mobile networks (2024) https://www.cnn.com/2024/02/23/tech/att-outage-customer-service | 30m 45s | ||||||
| 1/20/26 | Data Is Hazardous Material: How Data Brokers, Telematics, and Over-Collection Are Reshaping Cyber Risk | The FTC has issued an order against General Motors for collecting and selling drivers’ precise location and behavior data, gathered every few seconds and marketed as a safety feature. That data was sold into insurance ecosystems and used to influence pricing and coverage decisions — a clear reminder that how organizations collect, retain, and share data now carries direct security, regulatory, and financial risk. In this episode of Cyberside Chats, we explain why the GM case matters to CISOs, cybersecurity leaders, and IT teams everywhere. Data proliferation doesn’t just create privacy exposure; it creates systemic risk that fuels identity abuse, authentication bypass, fake job applications, and deepfake campaigns across organizations. The message is simple: data is hazardous material, and minimizing it is now a core part of cybersecurity strategy. Key Takeaways: 1. Prioritize data inventory and mapping in 2026. You cannot assess risk, select controls, or meet regulatory obligations without knowing what data you have, where it lives, how it flows, and why it is retained. 2. Reduce data to reduce risk. Data minimization is a security control that lowers breach impact, compliance burden, and long-term cost. 3. Expect that regulators care about data use, not just breaches. Enforcement increasingly targets over-collection, secondary use, sharing, and retention even when no breach occurs. 4. Create and actively use a data classification policy. Classification drives retention, access controls, monitoring, and protection aligned to data value and regulatory exposure. 5. Design identity and recovery assuming personal data is already compromised. Build authentication and recovery flows that do not rely on the secrecy of SSNs, dates of birth, addresses, or other static personal data. 6. Train teams on data handling, not just security tools. Ensure engineers, IT staff, and business teams understand what data can be collected, how long it can be retained, where it may be stored, and how it can be shared. Resources: 1. California Privacy Protection Agency — Delete Request and Opt-Out Platform (DROP) https://privacy.ca.gov/drop/ 2. FTC Press Release — FTC Takes Action Against General Motors for Sharing Drivers’ Precise Location and Driving Behavior Data https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-takes-action-against-general-motors-sharing-drivers-precise-location-driving-behavior-data 3. California Delete Act (SB 362) — Overview https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362 4. Texas Attorney General — Data Privacy Enforcement Actions https://www.texasattorneygeneral.gov/news/releases 5. Data Breaches by Sherri Davidoff https://www.amazon.com/Data-Breaches-Opportunity-Sherri-Davidoff/dp/0134506782 | 19m 25s | ||||||
| 1/13/26 | Venezuela’s Blackout: Cybercrime Domino Effect | When Venezuela experienced widespread power and internet outages, the impact went far beyond inconvenience—it created a perfect environment for cyber exploitation. In this episode of Cyberside Chats, we use Venezuela’s disruption as a case study to show how cyber risk escalates when power, connectivity, and trusted services break down. We examine why phishing, fraud, and impersonation reliably surge after crises, how narratives around cyber-enabled disruption can trigger copycat or opportunistic attacks, and why even well-run organizations resort to risky security shortcuts when normal systems fail. We also explore how attackers weaponize emergency messaging, impersonate critical infrastructure and connectivity providers, and exploit verification failures when standard workflows are disrupted. The takeaway is simple: when infrastructure collapses, trust erodes—and cybercrime scales quickly to fill the gap. | 13m 42s | ||||||
| 1/6/26 | What the Epstein Files Teach Us About Redaction and AI | The December release of the Epstein files wasn’t just controversial—it exposed a set of security problems organizations face every day. Documents that appeared heavily redacted weren’t always properly sanitized. Some files were pulled and reissued, drawing even more attention. And as interest surged, attackers quickly stepped in, distributing malware and phishing sites disguised as “Epstein archives.” In this episode of Cyberside Chats, we use the Epstein files as a real-world case study to explore two sides of the same problem: how organizations can be confident they’re not releasing more data than intended, and how they can trust—or verify—the information they consume under pressure. We dig into redaction failures, how AI tools change the risk model, how attackers weaponize breaking news, and practical ways teams can authenticate data before reacting. | 15m 28s | ||||||
| 12/30/25 | Amazon's Warning: The New Reality of Initial Access | Amazon released two security disclosures in the same week — and together, they reveal how modern attackers are getting inside organizations without breaking in. One case involved a North Korean IT worker who entered Amazon’s environment through a third-party contractor and was detected through subtle behavioral anomalies rather than malware. The other detailed a years-long Russian state-sponsored campaign that shifted away from exploits and instead abused misconfigured edge devices and trusted infrastructure to steal and replay credentials. Together, these incidents show how nation-state attackers are increasingly blending into human and technical systems that organizations already trust — forcing defenders to rethink how initial access really happens going into 2026. Key Takeaways 1. Treat hiring and contractors as part of your attack surface. Nation-state actors are deliberately targeting IT and technical roles. Contractor onboarding, identity verification, and access scoping should be handled with the same rigor as privileged account provisioning. 2. Secure and monitor network edge devices as identity infrastructure. Misconfigured edge devices have become a primary initial access vector. Inventory them, assign ownership, restrict management access, and monitor them like authentication systems — not just networking gear. 3. Enforce strong MFA everywhere credentials matter. If credentials can be used without MFA, assume they will be abused. Require MFA on VPNs, edge device management interfaces, cloud consoles, SaaS admin portals, and internal administrative access. 4. Harden endpoints and validate how access actually occurs. Endpoint security still matters. Harden devices and look for signs of remote control, unusual latency, or access paths that don’t match how work is normally done. 5. Shift detection from “malicious” to “out of place.” The most effective attacks often look legitimate. Focus detection on behavioral mismatches — access that technically succeeds but doesn’t align with role, geography, timing, or expected workflow. Resources: 1. Amazon Threat Intelligence Identifies Russian Cyber Threat Group Targeting Western Critical Infrastructure https://aws.amazon.com/blogs/security/amazon-threat-intelligence-identifies-russian-cyber-threat-group-targeting-western-critical-infrastructure/ 2. Amazon Caught North Korean IT Worker by Tracing Keystroke Data https://www.bloomberg.com/news/newsletters/2025-12-17/amazon-caught-north-korean-it-worker-by-tracing-keystroke-data/ 3. North Korean Infiltrator Caught Working in Amazon IT Department Thanks to Keystroke Lag https://www.tomshardware.com/tech-industry/cyber-security/north-korean-infiltrator-caught-working-in-amazon-it-department-thanks-to-lag-110ms-keystroke-input-raises-red-flags-over-true-location 4. Confessions of a Laptop Farmer: How an American Helped North Korea’s Remote Worker Scheme https://www.bloomberg.com/news/articles/2023-08-23/confessions-of-a-laptop-farmer-how-an-american-helped-north-korea-s-remote-worker-scheme 5. Hiring security checklist https://www.lmgsecurity.com/resources/hiring-security-checklist/ | 15m 55s | ||||||
| 12/23/25 | AI Broke Trust: Identity Has to Step Up in 2026 | AI has supercharged phishing, deepfakes, and impersonation attacks—and 2025 proved that our trust systems aren’t built for this new reality. In this episode, Sherri and Matt break down the #1 change every security program needs in 2026: dramatically improving identity and authentication across the organization. We explore how AI blurred the lines between legitimate and malicious communication, why authentication can no longer stop at the login screen, and where organizations must start adding verification into everyday workflows—from IT support calls to executive requests and financial approvals. Plus, we discuss what “next-generation” user training looks like when employees can no longer rely on old phishing cues and must instead adopt identity-safety habits that AI can’t easily spoof. If you want to strengthen your security program for the year ahead, this is the episode to watch. Key Takeaways: Audit where internal conversations trigger action. Before adding controls, understand where trust actually matters—financial approvals, IT support, HR changes, executive requests—and treat those points as attack surfaces. Expand authentication into everyday workflows. Add verification to calls, video meetings, chats, approvals, and support interactions using known systems, codes, and out-of-band confirmation. Apply friction intentionally where mistakes are costly. Use verified communication features in collaboration platforms. Enable identity indicators, reporting features, and access restrictions in tools like Teams and Slack, and treat them as identity systems rather than just chat tools. Implement out-of-band push confirmation for high-risk requests. Authenticator-based confirmation defeats voice, video, and message impersonation because attackers rarely control multiple channels simultaneously. Move toward continuous identity validation. Identity should be reassessed as behavior and risk change, with step-up verification and session revocation for high-risk actions. Redesign training around identity safety. Teach employees how to verify people and requests, not just emails, and reward them for slowing down and confirming—even when it frustrates leadership. Tune in weekly on Tuesdays at 6:30 am ET for more cybersecurity advice, and visit www.LMGsecurity.com if you need help with cybersecurity testing, advisory services, or training. Resources: CFO.com – Deepfake CFO Scam Costs Engineering Firm $25 Million https://www.cfo.com/news/deepfake-cfo-hong-kong-25-million-fraud-cyber-crime/ Retool – MFA Isn’t MFA https://retool.com/blog/mfa-isnt-mfa Sophos MDR tracks two ransomware campaigns using “email bombing,” Microsoft Teams “vishing” https://news.sophos.com/en-us/2025/01/21/sophos-mdr-tracks-two-ransomware-campaigns-using-email-bombing-microsoft-teams-vishing/ Wired – Doxers Posing as Cops Are Tricking Big Tech Firms Into Sharing People’s Private Data https://www.wired.com/story/doxers-posing-as-cops-are-tricking-big-tech-firms-into-sharing-peoples-private-data/ LMG Security – 5 New-ish Microsoft Security Features & What They Reveal About Today’s Threats https://www.lmgsecurity.com/5-new-ish-microsoft-security-features-what-they-reveal-about-todays-threats/ | 32m 44s | ||||||
| 12/16/25 | The 5 New-ish Microsoft Security Features to Roll Out in 2026 | Microsoft is rolling out a series of new-ish security features across Microsoft 365 in 2026 — and these updates are no accident. They’re direct responses to how attackers are exploiting collaboration tools like Teams, Slack, Zoom, and Google Chat. In this episode, Sherri and Matt break down the five features that matter most, why they’re happening now, and how every organization can benefit from these lessons, even if you’re not a Microsoft shop. We explore the rise of impersonation attacks inside collaboration platforms, the security implications of AI copilots like Microsoft Copilot and Gemini, and why identity boundaries and data governance are quickly becoming foundational to modern security programs. You’ll come away with a clear understanding of what these new-ish Microsoft features signal about the evolving threat landscape — and practical steps you can take today to strengthen your security posture. Key Takeaways Treat collaboration platforms as high-risk communication channels. Attackers increasingly use Teams, Slack, Zoom, and similar tools to impersonate coworkers or support staff, and organizations should help employees verify unexpected contacts just as rigorously as they verify email. Make it easy for users to report suspicious activity. Whether or not your platform offers a built-in reporting feature like Microsoft’s suspicious-call button, employees need a simple, well-understood way to escalate strange messages or calls inside collaboration tools. Monitor external collaboration for anomalies. Microsoft’s new anomaly report highlights a growing need across all ecosystems to watch for unexpected domains, unusual activity patterns, and impersonation attempts that occur through external collaboration channels. Classify and label sensitive data before enabling AI assistants. AI tools such as Copilot, Gemini, and Slack GPT inherit user permissions and may access far more information than intended if organizations haven’t established clear sensitivity labels and access boundaries. Enforce identity and tenant boundaries to limit data leakage. Features like Tenant Restrictions v2 demonstrate the importance of restricting where users can authenticate and ensuring that corporate data stays within approved environments. Update security training to reflect collaboration-era social engineering. Modern attacks frequently occur through chat messages, impersonated vendor accounts, malicious external domains, or voice/video calls, and training must evolve beyond traditional email-focused programs. Please follow our podcast for the latest cybersecurity advice, and visit us at www.LMGsecurity.com if you need help with technical testing, cybersecurity consulting, and training! Resources Mentioned Microsoft 365: Advancing Microsoft 365 – New Capabilities and Pricing Update: https://www.microsoft.com/en-us/microsoft-365/blog/2025/12/04/advancing-microsoft-365-new-capabilities-and-pricing-update/ Microsoft 365 Roadmap – Suspicious Call Reporting (ID 536573): https://www.microsoft.com/en-us/microsoft-365/roadmap?id=536573 Check Point Research: Exploiting Trust in Microsoft Teams: https://blog.checkpoint.com/research/exploiting-trust-in-collaboration-microsoft-teams-vulnerabilities-uncovered/ Phishing Susceptibility Study (arXiv): https://arxiv.org/abs/2510.27298 LMG Security Video: Email Bombing & IT Helpdesk Spoofing Attacks—How to Stop Them: https://www.lmgsecurity.com/videos/email-bombing-it-helpdesk-spoofing-attacks-how-to-stop-them/ | 20m 49s | ||||||
| 12/9/25 | The Extension That Spied on You: Inside ShadyPanda’s 7-Year Attack | A massive 7-year espionage campaign hid in plain sight. Harmless Chrome and Edge extensions — wallpaper tools, tab managers, PDF converters — suddenly flipped into full surveillance implants, impacting more than 4.3 million users. In this episode, we break down how ShadyPanda built trust over years, then weaponized auto-updates to steal browsing history, authentication tokens, and even live session cookies. We’ll walk through the timeline, what data was stolen, why session hijacking makes this attack so dangerous, and the key steps security leaders must take now to prevent similar extension-based compromises. Key Takeaways Audit and restrict browser extensions across the organization. Inventory all extensions in use, remove unnecessary ones, and enforce an allowlist through enterprise browser controls. Treat extensions as part of your software supply chain. Extensions can flip from safe to malicious overnight. Include them in risk assessments and governance processes. Detect and mitigate session hijacking. Monitor for unusual token reuse, shorten token lifetimes where possible, and watch for logins that bypass MFA. Enforce enterprise browser security controls. Use Chrome/Edge enterprise features or MDM to lock down permissions, block unapproved installations, and enable safe browsing modes. Reduce extension sprawl with policy and training. Educate employees that extensions carry real security risk. Require justification for new installations and empower IT to remove unnecessary ones. Please tune in weekly for more cybersecurity advice, and visit www.LMGsecurity.com if you need help with your cybersecurity testing, advisory services, and training. Resources: KOI Intelligence (Original Research): https://www.koi.ai/blog/4-million-browsers-infected-inside-shadypanda-7-year-malware-campaign Malwarebytes Labs Coverage: https://www.malwarebytes.com/blog/news/2025/12/sleeper-browser-extensions-woke-up-as-spyware-on-4-million-devices Infosecurity Magazine Article: https://www.infosecurity-magazine.com/news/shadypanda-infects-43m-chrome-edge/ #ShadyPanda #browserextension #browsersecurity #cybersecurity #cyberaware #infosec #cyberattacks #ciso | 20m 58s | ||||||
| 12/2/25 | ![]() Inside Jobs: How CrowdStrike, DigitalMint & Tesla Got Burned | Insider threats are accelerating across every sector. In this episode, Sherri and Matt unpack the CrowdStrike insider leak, the two DigitalMint employees indicted for BlackCat ransomware activity, and Tesla’s multi-year insider incidents ranging from nation-state bribery to post-termination extortion. They also examine the 2025 crackdown on North Korean operatives who used stolen identities and deepfake interviews to get hired as remote workers inside U.S. companies. Together, these cases reveal how attackers are buying, recruiting, impersonating, and embedding insiders — and why organizations must rethink how they detect and manage trusted access. Key Takeaways Build a culture of ethics and make legal consequences explicit. Use real cases — Tesla, CrowdStrike, DigitalMint — to show employees that insider misconduct leads to indictments and prison time. Clear messaging, training, and leadership visibility reinforce deterrence. Enforce least-privilege access and conduct quarterly access reviews. Limit who can view or modify sensitive dashboards, admin tools, and SSO consoles. Regular recertification ensures employees only retain the permissions they legitimately need. Deploy screenshot prevention and data-leak controls across critical systems. Implement watermarking, VDI/browser isolation, screenshot detection, and DLP/CASB rules to deter and detect unauthorized capture or exfiltration of sensitive data. Strengthen identity verification for remote and distributed employees. Use periodic identity rechecks and require company-managed, attested devices for sensitive roles. Prohibit personal-device access for privileged work to reduce impersonation risk. Monitor high-risk users with behavior and anomaly analytics. Flag unusual patterns such as off-hours access, atypical data movement, sudden repository interest, or crypto-related activity on work devices. 
Behavioral analytics helps uncover malicious intent even when credentials appear valid. Require your vendors to follow the same insider-threat safeguards you use internally. Ensure MSPs, SaaS providers, IR partners, and software vendors enforce strong access controls, identity verification, monitoring, and device security. Vendor insiders can quickly become your insiders. Resources: TechCrunch – CrowdStrike insider leak coverage: https://techcrunch.com/2025/11/21/crowdstrike-fires-suspicious-insider-who-passed-information-to-hackers/ Reuters – DigitalMint ransomware indictment reporting: https://www.reuters.com/legal/government/us-prosecutors-say-cybersecurity-pros-ran-cybercrime-operation-2025-11-03/ BleepingComputer – North Korean fake remote worker scheme: https://www.bleepingcomputer.com/news/security/us-arrests-key-facilitator-in-north-korean-it-worker-fraud-scheme/ “Ransomware and Cyber Extortion: Response and Prevention” (book by Sherri Davidoff, Matt Durrin, and Karen Sprenger): https://www.amazon.com/Ransomware-Cyber-Extortion-Response-Prevention-ebook/dp/B09RV4FPP9 LMG’s Hiring Security Checklist: https://www.lmgsecurity.com/resources/hiring-security-checklist/ Want to attend a live version of Cyberside Chats? Visit us at https://www.lmgsecurity.com/lmg-resources/cyberside-chats-podcast/ to register for our next monthly live session. #insiderthreat #cybersecurity #cyberaware #cybersidechats #ransomware #ransomwareattack #crowdstrike #DigitalMint #tesla #remotework | 23m 27s | ||||||
| 11/25/25 | ![]() Made in China—Hacked Everywhere? | From routers to office cameras to employee phones and even the servers running your network, Chinese-manufactured components are everywhere—including throughout your own organization. In this live Cyberside Chats, we’ll explore how deeply these devices are embedded in modern infrastructure and what that means for cybersecurity, procurement, and third-party risk. We’ll break down new government warnings about hidden communication modules, rogue firmware, and “ghost devices” in imported tech—and how even trusted brands may ship products with risky components. Most importantly, we’ll share what you can do right now to identify exposure, strengthen procurement and third-party risk management (TPRM) processes, and protect your organization before the next breach or regulation hits. Join us live for a 25-minute deep dive plus Q&A—and find out whether your supply chain is truly secure… or “Made in China—and Hacked Everywhere.” Key Takeaways: Require an Access Bill of Materials (ABOM) for every connected device. Ask vendors to disclose all remote access paths, cloud services, SIMs/radios, update servers, and subcontractors. This is the most effective way to catch hidden modems, undocumented connectivity, or offshore control channels before procurement. Treat hardware procurement with the same rigor as software supply chain risk. Routers, cameras, inverters, and vehicles must be vetted like software: know the origin of components, how firmware is managed, and who can control or modify the device. This mindset shift prevents accidental onboarding of hidden risks. Establish and enforce a simple connected-device procurement policy. Set clear rules: no undocumented connectivity, no unmanaged remote access, no end-of-life firmware in new buys, and mandatory security review for all "smart" devices. This helps buyers avoid risky equipment even when budgets are tight. Reduce exposure through segmentation and access restrictions. 
Before replacing anything, isolate high-risk devices, block unnecessary outbound traffic, and disable vendor remote access. These low-cost steps significantly reduce exposure while giving you time to plan longer-term changes. Strengthen third-party risk management (TPRM) for vendors of connected equipment. Expand TPRM reviews to cover firmware integrity, logging, hosting jurisdictions, remote access practices, and subcontractors. This ensures your vendor ecosystem doesn't introduce avoidable hardware-level vulnerabilities. References: Wall Street Journal (Nov 19, 2025) – “Can Chinese-Made Buses Be Hacked? Norway Drove One Down a Mine to Find Out.” (Chinese electric bus remote-disable and SIM access findings) U.S. House Select Committee on China & House Homeland Security Committee (Sept 2024 Report) – Port Crane Security Assessment. (Unauthorized modems, supply-chain backdoors, and ZPMC risk findings) FDA & CISA (Feb–Mar 2025) – Security Advisory: Contec CMS8000 Patient Monitor. (Backdoor enabling remote file execution and hidden network communications) Anthropic (Nov 13, 2025) – “Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign.” (China-linked AI-driven intrusion playbook and campaign analysis) LMG Security (2025) – “9 Tips to Streamline Your Vendor Risk Management Program.” https://www.lmgsecurity.com/9-tips-to-streamline-your-vendor-risk-management-program #chinesehackers #cybersecurity #infosec #LMGsecurity #ciso #TPRM #thirdpartyrisk #security | 25m 46s | ||||||
| 11/18/25 | ![]() Holiday Hackers—The 2025 AI Fraud Boom | Hackers are using AI to supercharge holiday scams—flooding the web with fake ads, phishing pages, and credential-stealing bots. This season, researchers predict a record spike in automated attacks and malvertising campaigns that blur the line between human and machine. Sherri Davidoff and Matt Durrin break down what’s new this holiday season—from AI-generated phishing kits and bot-driven account takeovers to the rise of prebuilt “configs” for credential stuffing. We used WormGPT to produce a ready-to-run holiday phishing page—a proof-of-concept that demonstrates how quickly scammers can launch these attacks with evil AI tools. This episode reveals how personal habits turn into corporate risk. Before Black Friday and Christmas hit, learn what your team can do right now to protect people, passwords, and payments. Key Takeaways – How to Defend Against the 2025 AI Fraud Boom Treat holiday scams as a business risk, not just a retail problem. Automated bots, fake ads, and AI-generated phishing campaigns target your employees too — not just shoppers. Expect higher attack volume through the entire holiday season. Expect password reuse—and enforce strong MFA everywhere. Employees will reuse personal shopping passwords at work. Require MFA on all accounts — especially SSO, admin, and vendor logins — and block reused credentials where possible. Filter out malicious ads and spoofed sites. Use DNS and web filtering to block malvertising and look-alike domains. Encourage staff to verify URLs and avoid “too-good-to-be-true” promotions or charity appeals. Strengthen bot and fraud detection. Tune WAF and bot-management tools to catch automated login attempts, fake account creation, and credential stuffing. These attacks spike before Black Friday and often continue into January. Run a short holiday security awareness push before Black Friday—and repeat before Christmas. 
Brief all staff, especially finance and customer service, on seasonal scams: gift-card fraud, fake charities, refund and invoice scams, malvertising, and holiday-themed phishing. Remember: personal security is corporate security. BYOD, home shopping, and password reuse mean an employee’s compromise can quickly become your organization’s compromise. Keep the message simple: protect your accounts, protect your company. Don't forget to follow us for more cybersecurity advice, and visit us at www.LMGsecurity.com for tip sheets, blogs, and more! Resources: RH-ISAC — 2025 Holiday Season Cyber Threat Trends: https://rhisac.org/press-release/holiday-threats-2025/ (RH-ISAC) Malwarebytes — Home Depot Halloween phish gives users a fright, not a freebie: https://www.malwarebytes.com/blog/news/2025/10/home-depot-halloween-phish-gives-users-a-fright-not-a-freebie (Malwarebytes) Bitdefender Labs — Trick or Treat: Bitdefender Labs Uncovers Halloween Scams Flooding Inboxes: https://www.bitdefender.com/en-us/blog/hotforsecurity/bitdefender-labs-uncovers-halloween-scams-flooding-inboxes-and-feeds (Bitdefender) FBI / IC3 PSA — Hacker Com: Cyber Criminal Subset of The Com — background on The Com threat cluster referenced by RH-ISAC and seen in holiday fraud activity: https://www.ic3.gov/PSA/2025/PSA250723 (Internet Crime Complaint Center) Fast Company — Holiday season cybersecurity lessons: The vulnerability of the retail workforce: https://www.fastcompany.com/91270554/holiday-season-cybersecurity-lessons-the-vulnerability-of-the-retail-workforce (Fast Company) #HolidayScams #Phishing #Malvertising #Cybersecurity #Cyberaware #SMB #BlackFridayScams | 14m 07s | ||||||
| 11/11/25 | ![]() LOUVRE Was the Password?! Cybersecurity Lessons from the Heist | When thieves pulled off a lightning-fast heist at the Louvre on October 19, 2025, the world focused on the stolen jewels. But leaked audit reports soon revealed another story — one of weak passwords, legacy systems, and a decade of ignored warnings. In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin dig into the cybersecurity lessons behind the Louvre’s seven-minute robbery. They explore how outdated infrastructure, poor vendor oversight, and default credentials mirror the same risks plaguing modern organizations — from hospitals to banks. Listen as Sherri and Matt connect the dots between a world-famous museum and your own IT environment — and share practical steps to keep your organization from becoming the next headline. Key Takeaways Audit for weak and shared passwords. Regularly scan for shared, default, or vendor credentials. Replace them with strong, unique, role-based passwords and enforce MFA across administrative and vendor accounts. Conduct regular penetration tests and track remediation. Perform annual or semiannual pen tests that include internal movement and segmentation checks. Assign owners for every finding, set deadlines, and verify fixes. Vet and contractually bind third-party vendors. Require patching and OS update clauses in vendor contracts, and verify each vendor’s security practices through audits or reports such as SOC 2. Integrate IT and physical security. Coordinate teams so camera, badge, and alarm systems receive the same cybersecurity oversight as IT systems. Check for remote access exposure and outdated credentials. Plan for legacy system containment. Identify unsupported systems, isolate them on segmented networks, and add compensating controls. Build a phased replacement roadmap tied to budget and risk. Create a continuous audit and feedback loop. Assign clear ownership for all audit findings and track progress. 
Escalate unresolved risks to leadership to maintain visibility and accountability. Control your media communications. Limit access to sensitive reports and train staff to prevent leaks. Manage breach-related communications strategically to protect reputation and trust. Don't forget to follow us for weekly expert cybersecurity insights on today's threats. Resources Libération / CheckNews – “Louvre as a password, outdated software, impossible updates…” (Nov. 1, 2025) CNET – “You probably have a better password than the Louvre did — learn from its mistake.” (Nov. 2025) YouTube – Hank Green interviews Sherri Davidoff on the Louvre Heist LMG Security – “How Hackers Turned Cameras into Crypto Miners” (Scientific American) #louvreheist #cybersecurity #cyberaware #password #infosec #ciso | 17m 53s | ||||||
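The Louvre episode's first takeaway, scanning for shared, default, or vendor credentials, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not a production audit tool: the account names and password values below are hypothetical, and a real engagement would work from properly exported credential hashes with authorization. Comparing unsalted digests is what makes password *sharing* detectable here; that same property is why production systems should store salted hashes instead.

```python
# Minimal sketch: flag accounts using known-default passwords and
# accounts that share a password with another account.
# All account names and passwords below are hypothetical examples.
import hashlib
from collections import Counter

def digest(pw: bytes) -> str:
    return hashlib.sha256(pw).hexdigest()

# Hypothetical export: account name -> password digest.
accounts = {
    "camera-admin": digest(b"LOUVRE"),
    "badge-server": digest(b"LOUVRE"),
    "vendor-hvac":  digest(b"admin"),
    "sherri":       digest(b"x9!Lq#v72$TmWb"),
}

# Known weak/vendor-default values to flag (extend per device vendor).
weak = {digest(w) for w in (b"admin", b"password", b"12345", b"LOUVRE")}

weak_hits = sorted(a for a, h in accounts.items() if h in weak)

# Any digest appearing more than once means a shared password.
counts = Counter(accounts.values())
shared_hits = sorted(a for a, h in accounts.items() if counts[h] > 1)

print("Weak/default passwords:", weak_hits)
print("Shared passwords:", shared_hits)
```

Even a toy scan like this surfaces the Louvre-style failure mode: the same default string protecting multiple administrative accounts.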
Chart Positions
1 placement across 1 market.
