
Insights from recent episode analysis
Insight categories: Audience Interest, Podcast Focus, Publishing Consistency, Platform Reach.
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience: est. listeners per new episode within ~30 days. 10,001 - 25,000
- Monthly Reach: unique listeners across all episodes (30 days). 25,001 - 75,000
- Active Followers: loyal subscribers who consistently listen. 5,001 - 15,000
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly.
Total Followers: — | Total Plays: — | Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
- Dr. Rhonda Farrell: AI Doesn't Break Your Organization. It Reveals It. (Apr 23, 2026; duration unknown)
- Navigating the Complexities of AI Governance: Introducing TAIMScore™ (Apr 22, 2026; duration unknown)
- Distributed AI Has No Governor: The Structural Failure Behind Enterprise AI Accountability (Apr 11, 2026; duration unknown)
- AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify” (Mar 29, 2026; duration unknown)
- Korean Air KC&D: Supply Chain Breach and the Data That Never Left (Mar 26, 2026; duration unknown)
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 4/23/26 | Dr. Rhonda Farrell: AI Doesn't Break Your Organization. It Reveals It. | Most organizations are not failing because they lack effort. They are failing because policy, process, people, and platforms were never designed to operate as an integrated system. AI does not fix that system. AI reveals it.
In this Guest Feature, Dr. Tuboise Floyd sits down with Dr. Rhonda Farrell — United States Marine Corps veteran, ASQ Fellow, IEEE Senior Member, ISSA Distinguished Fellow, and Founder of the Cyber & STEAM Global Innovation Alliance — for a field report from inside the DoD, DHS, NSA, Department of State, and the broader intelligence community on what functional AI governance actually requires.
Her thesis: AI is a forcing function. And pressure does not break solutions. It reveals them.
━━━━━━━━━━━━━━━━━━━━━━━━━━
🔑 KEY IDEAS
━━━━━━━━━━━━━━━━━━━━━━━━━━
▪ Why high-performing people keep drowning inside low-performing ecosystems
▪ The difference between performative compliance and mission-functional compliance
▪ Why policy without an execution path is just theory
▪ The four-P traceability model: Policy → Process → People → Platforms
▪ How to sequence NIST CSF, CMMC, NIST AI RMF, and TAIMScore™ as a phased maturation program
▪ The Monday Morning Move — a live exercise leaders can run tomorrow
▪ Why pressure does not break solutions. It reveals them.
━━━━━━━━━━━━━━━━━━━━━━━━━━
🎙 CHAPTERS
━━━━━━━━━━━━━━━━━━━━━━━━━━
00:00:00 — Open: The Veteran's Diagnosis
00:02:00 — High-Performing People, Low-Performing Ecosystems
00:05:00 — The Design Problem: Why Integration Was Never Built
00:10:00 — Policy → Process → People → Platforms: The Traceability Model
00:12:00 — Performative vs. Mission-Functional Compliance
00:14:00 — AI as the Forcing Function
00:17:00 — Sequencing NIST CSF, CMMC, NIST AI RMF, and TAIMScore™
00:20:00 — Policy Without Execution Is Just Theory
00:25:00 — The Design Shift: From Frameworks to Operationalization
00:32:00 — The Monday Morning Move
00:37:00 — Pressure Does Not Break Solutions. It Reveals Them.
00:39:00 — Where to Follow Dr. Farrell's Work
00:41:00 — Close: Govern the Machine or Be the Resource It Consumes
━━━━━━━━━━━━━━━━━━━━━━━━━━
📺 WATCH ON YOUTUBE
━━━━━━━━━━━━━━━━━━━━━━━━━━
https://youtu.be/ACe9px5XZZ8
━━━━━━━━━━━━━━━━━━━━━━━━━━
📰 COMPANION PIECES
━━━━━━━━━━━━━━━━━━━━━━━━━━
▶ Issue 015 of The AI Governance Record (written companion): https://theaigovernancerecord.com/record/issue-015
▶ Subscribe to the newsletter: https://theaigovernancerecord.com/newsletter
▶ Deep-dive blog: https://theaigovernancebriefing.com/blog
━━━━━━━━━━━━━━━━━━━━━━━━━━
🎟 LIVE EVENT — MAY 14, 2026
━━━━━━━━━━━━━━━━━━━━━━━━━━
Human Signal Town Hall — The Strict Reality of AI Governance
Dr. Rhonda Farrell is a confirmed speaker. 50 seats. $97 pre-sale through April 30. $147 from May 1. Reserve: https://humansignal.io/townhall
━━━━━━━━━━━━━━━━━━━━━━━━━━
📄 READ THE POSITION PAPER
━━━━━━━━━━━━━━━━━━━━━━━━━━
The Pedagogy Problem in AI Governance — Dr. Tuboise Floyd's foundational position paper. Available on SSRN and at humansignal.io/position-paper. DOI: 10.2139/ssrn.6549178
━━━━━━━━━━━━━━━━━━━━━━━━━━
🎓 TAIMSCORE™ ASSESSOR WORKSHOP
━━━━━━━━━━━━━━━━━━━━━━━━━━
Want to measure your organization's AI governance maturity against a standard? Dr. Floyd is a TAIMScore™ Certified Assessor (HISPI). https://humansignal.io/taimscore
━━━━━━━━━━━━━━━━━━━━━━━━━━
📚 FAILURE FILES™
━━━━━━━━━━━━━━━━━━━━━━━━━━
Governance pedagogy through real AI failure analysis. Free public instrument from Human Signal, underwritten by Project Cerebellum. https://humansignal.io/failure-files
━━━━━━━━━━━━━━━━━━━━━━━━━━
👤 ABOUT THE GUEST
━━━━━━━━━━━━━━━━━━━━━━━━━━
Dr. Rhonda Farrell, DM, MS, MBA, CISSP, PMP, LSSMBB, is Executive Advisor, Transformation Strategist, and Founder & CEO of Global Innovation Strategies and the Cyber & STEAM Global Innovation Alliance. A United States Marine Corps veteran with twenty-plus years driving enterprise transformation, AI, cybersecurity, and innovation across the DoD, NSA, DHS, Department of State, and broader intelligence community. ASQ Fellow. IEEE Senior Member. ISSA Distinguished Fellow.
Her Cyber & STEAM Global Innovation Alliance is building toward 10,000 partners serving one million people globally across STEAM, cyber, and innovation.
LinkedIn: https://www.linkedin.com/in/rhondafarrell/
Website: https://gblinnovstratllc.com
Email: CEO@gblinnovstratllc.com
Watch Dr. Farrell's weekly "Spotlight on Strategy" series on YouTube: https://youtube.com/@gblinnovstrat
━━━━━━━━━━━━━━━━━━━━━━━━━━
🎙 ABOUT THE HOST
━━━━━━━━━━━━━━━━━━━━━━━━━━
Dr. Tuboise Floyd, PhD, is Founder and Chief Sensemaking Officer of Human Signal (humansignal.io), Editor in Chief of The AI Governance Record (theaigovernancerecord.com), Host of The AI Governance Briefing (theaigovernancebriefing.com), and a TAIMScore™ Certified Assessor. His doctoral work at Auburn University (2010) in adult education and systems theory underpins Human Signal's pedagogical frameworks, including the Trust Gap, GASP™, the Workflow Thesis, Noise Discipline, and the L.E.A.C. Protocol™.
ORCID: 0009-0008-0055-072X
━━━━━━━━━━━━━━━━━━━━━━━━━━
🔗 CONNECT
━━━━━━━━━━━━━━━━━━━━━━━━━━
Website: https://humansignal.io
LinkedIn: https://linkedin.com/in/drtuboisefloyd
Brand channel: https://linkedin.com/in/theaigovernancebriefing
Podcast RSS: https://feeds.captivate.fm/the-ai-governance
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-ai-governance-briefing/id1847777960
━━━━━━━━━━━━━━━━━━━━━━━━━━
Independence is not a feature. It is the product. Govern the machine. Or be the resource it consumes.
#AIGovernance #AIRiskManagement #DigitalTransformation #Compliance #GRC #DrRhondaFarrell #DrTuboiseFloyd #HumanSignal #TheAIGovernanceBriefing #NISTAIRMF #TAIMScore #HISPI #USMC #DoD #NSA #EnterpriseAI #Leadership
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — |
| 4/22/26 | Navigating the Complexities of AI Governance: Introducing TAIMScore™ | TAIMScore™ is the Trusted AI Model Score — a 20-control AI governance framework built by HISPI Project Cerebellum. In this episode of The AI Governance Briefing, Dr. Tuboise Floyd, PhD breaks down how TAIMScore™ turns AI accountability into something you can measure, score, and prove.Four governance domains. Twenty essential controls. Mapped against NIST AI RMF, the EU AI Act, HIPAA, PCI DSS, SOC 2, EU GDPR, and the White House AI Executive Order. If your institution needs a blueprint for AI governance that survives regulatory scrutiny, this is the starting point.AI is already deployed. The institutions that survive will be the ones that can prove they govern it.──────────────────────────────────────WHAT YOU WILL LEARN──────────────────────────────────────∙ Why AI incidents make governance non-negotiable∙ The Project Cerebellum mission: AI should cause no harm∙ How the four TAIM domains — GOVERN, MAP, MEASURE, MANAGE — work as an accountability cycle∙ The 20 TAIMScore™ controls every AI-deploying organization must address∙ How to crosswalk your AI posture against global regulatory frameworks∙ Why the AI kill switch is essential governance — not optional──────────────────────────────────────CHAPTERS──────────────────────────────────────0:00 Welcome and Introduction0:28 Real Risks of AI1:18 Real Generative AI Incidents2:13 Project Cerebellum: AI Should Cause No Harm2:48 Vision and Mission3:33 The Four TAIM Domains5:01 GOVERN — AI Risk Training (Govern 2.2)5:31 GOVERN — Supply Chain Policy (Govern 6.1)6:01 MAP — Establishing Context (Map 1.2)6:26 MAP — System Requirements (Map 1.6)6:55 MAP — Third Party Risk (Map 4.1)7:13 MAP — Impact Documentation (Map 5.1)7:33 MEASURE — Human Evaluations (Measure 2.2)7:51 MEASURE — Reliability (Measure 2.5)8:07 MEASURE — Safety Risk (Measure 2.6)8:23 MEASURE — Explainability (Measure 2.9)8:37 MEASURE — Privacy Risk (Measure 2.10)8:51 MEASURE — 
Fairness and Bias (Measure 2.11)9:09 MEASURE — Risk Tracking (Measure 3.1)9:23 MEASURE — Feedback Loops (Measure 3.3)9:41 MEASURE — Performance Data (Measure 4.3)9:57 MANAGE — Resource Allocation (Manage 2.1)10:19 MANAGE — Unknown Risks (Manage 2.3)10:35 MANAGE — The Kill Switch (Manage 2.4)10:55 MANAGE — Post-Deployment Monitoring (Manage 4.1)11:11 MANAGE — Incident Communications (Manage 4.3)11:29 TAIMScore™: The Payoff11:52 Framework Crosswalks — HIPAA, SOC 2, EU AI Act13:51 Closing and How to Get Involved──────────────────────────────────────TAIMSCORE™ ASSESSOR WORKSHOP──────────────────────────────────────Virtual. Instructor-led. One day. Six CPEs. Third Friday of every month.🔗 humansignal.io/taimscore_assessor_workshop──────────────────────────────────────FAILURE FILES™ — TAIMScore™ APPLIED──────────────────────────────────────See TAIMScore™ applied to real institutional failures:🔗 humansignal.io/failure-files──────────────────────────────────────RESOURCES──────────────────────────────────────Project Cerebellum — projectcerebellum.comHISPI — hispi.orgHISPI LinkedIn Group — linkedin.com/groups/6624427Email — projectcerebellum@hispi.org──────────────────────────────────────ABOUT HISPI PROJECT CEREBELLUM──────────────────────────────────────Project Cerebellum is the AI Governance Think Tank of HISPI — the Holistic Information Security Practitioner Institute. The Trusted AI Model (TAIM) is a flagship framework of 72 controls across four domains that harmonize leading AI governance standards into a practical scoring system. TAIMScore™ was created by Taiye Lambo, Founder and Chief Artificial Intelligence Officer of HISPI.──────────────────────────────────────ABOUT THE HOST──────────────────────────────────────Dr. Tuboise Floyd, PhD is the Founder and Chief Sensemaking Officer of Human Signal, Editor in Chief of The AI Governance Record, and a TAIMScore™ Certified Assessor. 
He holds a PhD from Auburn University and is a member of the HISPI Advocacy & Education Working Group (Project Cerebellum).
──────────────────────────────────────
CONNECT
──────────────────────────────────────
Website: humansignal.io
Podcast: theaigovernancebriefing.com/podcast
LinkedIn: linkedin.com/in/drtuboisefloyd
Email: tuboise@theaigovernancebriefing.com
Govern the machine. Or be the resource it consumes.
#TAIMScore #AIGovernance #AIAccountability #HISPI #ProjectCerebellum #NISTAIRMF #EUAIAct #AICompliance #FailureFiles #TrustedAIModel #DrTuboiseFloyd #HumanSignal #TheAIGovernanceBriefing #BuilderClass #AIRisk
Companies and frameworks mentioned in this episode: HISPI (Holistic Information Security Practitioner Institute), Project Cerebellum, Microsoft, OpenAI, ISO, IEC, HIPAA, PCI DSS, SOC 2, EU AI Act, EU GDPR, White House AI Executive Order.
Takeaways:
- Organizations need robust frameworks for AI governance, particularly through the TAIM model.
- The TAIM framework is designed to ensure that AI deployments are safe, secure, responsible, and trustworthy, addressing potential risks proactively.
- The episode focuses on real-world examples of AI risks, illustrating the importance of governance in mitigating them.
- Effective AI governance requires continuous monitoring and assessment, ensuring that systems remain compliant with evolving regulatory standards.
- The TAIM score provides organizations with a concrete evaluation of their AI governance posture against relevant regulatory frameworks.
- Interdisciplinary collaboration matters in AI governance, underscoring the necessity of diverse perspectives in risk assessment.
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — |
| 4/11/26 | Distributed AI Has No Governor: The Structural Failure Behind Enterprise AI Accountability | EPISODE SUMMARYIn this episode of The AI Governance Briefing, Dr. Tuboise Floyd delivers a pointed analysis of why enterprise AI governance is failing at the structural level. The problem isn't a lack of policy — it's that governance was designed for a world that no longer exists. Distributed AI — running across edge devices, vendor stacks, and multi-agent pipelines — has dissolved the single point of control that traditional compliance frameworks depend on.──────────────────────────────────────KEY TAKEAWAYS──────────────────────────────────────Key Takeaway 1: Distributed AI Is a Governance Condition, Not a Technology TrendThe shift to distributed AI isn't just an infrastructure evolution — it's a fundamental change in where accountability lives. When AI executes across multiple nodes, devices, or third-party systems without unified oversight, you're no longer in a governance framework. You're in a governance gap. Every edge deployment, every federated model, every multi-agent workflow is an accountability question first, a technology question second.Key Takeaway 2: The Architecture of Blame Is Predictable — and AvoidableThe pattern behind every major AI failure in recent years is the same: the vendor says the output was within spec; the integrator says the client configured the workflow; the client says legal approved the policy; legal says the policy covered the old system. Nobody owns the failure. The reason isn't bad actors — it's structural ambiguity. When no one owns the decision at the node, blame distributes as efficiently as the AI does.Key Takeaway 3: "Permitted" Is Not the Same as "Admissible"A policy that allows a model to run is not the same as governance that can see what the model is doing. This visibility gap — between what is authorized on paper and what is observable in execution — is where accountability collapses. 
Functional governance requires audit trails, intervention triggers, and independence from vendor contracts built into the architecture itself, not appended to it.──────────────────────────────────────DR. FLOYD'S 3 DIAGNOSTIC QUESTIONS──────────────────────────────────────1. Who owns the decision at the node — not the system, the decision? If the answer is vague, you have a gap.2. What is the escalation path? A single risk officer cannot handle fifty simultaneous failures across fifty nodes. The architecture must match the distribution.3. What accountability exists without the vendor? If your governance breaks when the vendor changes the API, you don't have governance — you have vendor dependency.──────────────────────────────────────3 REQUIREMENTS FOR FUNCTIONAL GOVERNANCE──────────────────────────────────────1. Visibility at every execution point. If you cannot see the node, you cannot govern the node.2. Accountability without humans in every loop. Humans cannot scale to distributed AI. Audit trails and intervention triggers must be designed into the system.3. Independence. The governance structure must survive vendor changes and contract terminations.──────────────────────────────────────CLOSING REFLECTION──────────────────────────────────────The winners in the AI era won't be the organizations with the best technology. They'll be the ones with the structural discipline to govern it. This week, ask yourself three things: Can you name every device where your AI is making decisions? If your vendor changed the model tonight, how long would it take you to find out? And who is responsible when failure happens inside a workflow you don't control? Architect for reality — or discover reality when the system fails.Govern the machine. 
Or be the resource it consumes.──────────────────────────────────────CHAPTERS──────────────────────────────────────0:00 - The Illusion of Governance0:32 - Distributed AI Outruns Policy1:10 - The Architecture of Blame1:52 - The Trust Gap Framework2:18 - Permitted ≠ Admissible2:45 - Redesigning Accountability Architecture3:28 - 3 Diagnostic Questions4:10 - What Functional Governance Actually Requires──────────────────────────────────────FRAMEWORKS REFERENCED──────────────────────────────────────→ The Trust Gap — humansignal.io/frameworks/trust-gap→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol→ Failure Files™ — humansignal.io/failure-files→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop──────────────────────────────────────ABOUT THE HOST──────────────────────────────────────Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.Independence is not a feature. It is the product.──────────────────────────────────────SUPPORT THE SHOW──────────────────────────────────────Help fuel independent AI governance research, new episodes, and the Failure Files™ series.🔗 https://theaigovernancebriefing.com/supportEvery contribution sustains the signal.──────────────────────────────────────PRODUCTION NOTES──────────────────────────────────────Host & Producer: Dr. 
Tuboise Floyd
Creative Director: Jeremy Jarvis
A Human Signal Production
Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.
──────────────────────────────────────
CONNECT
──────────────────────────────────────
Website: humansignal.io
Podcast: theaigovernancebriefing.com
LinkedIn: linkedin.com/in/drtuboisefloyd
Email: tuboise@theaigovernancebriefing.com
General inquiries: hello@theaigovernancebriefing.com
──────────────────────────────────────
TRANSCRIPT
──────────────────────────────────────
Full transcript available upon request at hello@theaigovernancebriefing.com
──────────────────────────────────────
LEGAL
──────────────────────────────────────
© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice.
──────────────────────────────────────
TAGS
──────────────────────────────────────
AI governance, AI accountability, distributed AI, AI policy, responsible AI, AI compliance, AI risk management, AI at the edge, federated learning, multi-agent systems, edge computing AI, AI governance framework, AI accountability gap, AI oversight, trust gap framework, AI leadership, AI regulation, AI vendor risk, governance architecture, AI decision making, AI audit trail, AI policy failure, AI governance failure, GASP framework, L.E.A.C. Protocol, Failure Files, TAIMScore, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — |
| 3/29/26 | AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify” | EPISODE DESCRIPTIONAt a laid-back campus open forum, students are invited to ask questions about AI governance to Taiye Lambo, Founder and Chief Artificial Intelligence Officer of the Holistic Information Security Practitioner Institute (HISPI), and Dr. Tuboise Floyd, Founder of Human Signal and Host of The AI Governance Briefing.Speakers frame AI literacy as a civic and professional survival skill — not a technical one. Employers now expect workers to critically evaluate AI outputs, not just use them. The conversation covers deepfakes and short-form media manipulation, the dangers of overreliance on AI (including the attorney who cited fabricated ChatGPT case law in federal court), the principle of "never blindly trust, always verify," and the structural need for continuous auditing, accountability, and an honest human in the loop — especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.──────────────────────────────────────CHAPTERS──────────────────────────────────────00:00 Welcome and Setup00:52 Meet the Experts01:57 Taiye on Governance Focus02:53 Dr. 
Floyd Background and Podcast04:39 Open Forum Begins05:02 AI Literacy for Careers07:23 Threat or Opportunity Poll10:01 AI Literacy Beyond STEM10:49 Spotting Deepfakes in Shorts15:35 Using AI Without Replacing Learning16:14 Lawyer Case and Overtrusting AI18:08 Never Blindly Trust — Verify19:06 Wikipedia Analogy and Real Risks20:31 Business Ethics Reality Check21:06 Continuous Audits in Clinics21:28 Human in the Loop Matters22:04 Environmental AI Data Gaps23:13 Public Trust and Accountability23:33 Honest Human Oversight25:28 Tokens and Hallucinations26:51 Bias in Training Data27:56 Interviewing in the AI Era30:28 AI Disruption and Generational Shift33:21 High-Stakes AI Blind Spots36:02 Rapid Fire Career Advice41:03 Closing and Next Steps──────────────────────────────────────GUEST──────────────────────────────────────Taiye LamboFounder & Chief Artificial Intelligence OfficerHolistic Information Security Practitioner Institute (HISPI)🔗 https://www.hispi.org🔗 https://projectcerebellum.comLinkedIn: linkedin.com/in/taiyelamboTAIMScore™ Assessor Workshop🔗 https://humansignal.io/taimscore_assessor_workshop──────────────────────────────────────FRAMEWORKS REFERENCED──────────────────────────────────────→ The Trust Gap — humansignal.io/frameworks/trust-gap→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp→ Failure Files™ — humansignal.io/failure-files→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol──────────────────────────────────────KEY TAKEAWAYS──────────────────────────────────────1. AI governance is the structural discipline that makes ethical decision-making and risk mitigation possible — not a compliance checkbox.2. Employers now expect candidates to critically evaluate AI outputs. Using AI without scrutiny is a liability, not a skill.3. AI literacy is not a STEM competency. It is a professional survival skill for every sector.4. 
Human oversight is not optional in high-stakes AI deployments. Audit trails and intervention triggers must be designed in — not appended after failure.5. Understanding how AI systems are trained matters — especially in healthcare, law, and environmental contexts where bad data produces dangerous outputs.6. Domain knowledge, critical thinking, and governance certifications outperform tool-specific training in a market where the tools change every six months.──────────────────────────────────────SUPPORT THE SHOW──────────────────────────────────────Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.Help fuel independent AI governance research, new episodes, and the Failure Files™ series.🔗 https://theaigovernancebriefing.com/supportEvery contribution sustains the signal.──────────────────────────────────────ABOUT THE HOST──────────────────────────────────────Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.Independence is not a feature. It is the product.──────────────────────────────────────PRODUCTION NOTES──────────────────────────────────────Host & Producer: Dr. Tuboise FloydCreative Director: Jeremy JarvisA Human Signal ProductionRecorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.
──────────────────────────────────────
CONNECT
──────────────────────────────────────
Website: humansignal.io
Podcast: theaigovernancebriefing.com
LinkedIn: linkedin.com/in/drtuboisefloyd
Email: tuboise@theaigovernancebriefing.com
General inquiries: hello@theaigovernancebriefing.com
──────────────────────────────────────
TRANSCRIPT
──────────────────────────────────────
Full transcript available at: https://humansignal.io/blog/ai-governance-open-forum-never-blindly-trust-verify
──────────────────────────────────────
LEGAL
──────────────────────────────────────
© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Guest opinions are those of the guest alone.
──────────────────────────────────────
TAGS
──────────────────────────────────────
AI governance, AI literacy, AI accountability, AI policy, responsible AI, AI compliance, AI risk management, AI ethics, enterprise AI, government AI, technology leadership, deepfakes, AI hallucination, AI training data bias, human in the loop, AI oversight, AI career advice, AI governance certification, TAIMScore, GASP framework, Failure Files, Trust Gap, never blindly trust verify, Project Cerebellum, HISPI, Taiye Lambo, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — |
| 3/26/26 | Korean Air KC&D: Supply Chain Breach and the Data That Never Left | EPISODE DESCRIPTIONIn this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down the Korean Air / KC&D supply chain breach — a forensic autopsy of what happens when data governance doesn't travel with the data.In December 2025, Korean Air disclosed that 30,000 employee records were stolen. The breach didn't come through Korean Air's systems. It came through KC&D Service — a catering subsidiary spun off and sold to private equity in 2020. Five years later, KC&D was still holding Korean Air employee data on an unpatched Oracle ERP server. The Cl0p ransomware group exploited CVE-2025-61882 — CVSS 9.8 — and published 500GB on a dark web leak site.Six TAIMScore™ controls failed simultaneously. Three domains. All because the data moved out of sight — not out of risk.This is a Failure File™. Not a warning. A forensic record.──────────────────────────────────────KEY TOPICS──────────────────────────────────────∙ Supply chain governance and third-party vendor risk∙ What happens when a divestiture doesn't include data governance∙ The Oracle EBS zero-day and its 100+ organizational victims∙ TAIMScore™ forensic: GOVERN, MAP, and MANAGE domain failures∙ The one question every institution needs to ask today──────────────────────────────────────FRAMEWORKS REFERENCED──────────────────────────────────────→ Failure Files™ — humansignal.io/failure-files→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp→ The Trust Gap — humansignal.io/frameworks/trust-gap→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol──────────────────────────────────────SUPPORT THE SHOW──────────────────────────────────────Subscribe now to lock in the feed. 
This isn't just content — it's a continuing briefing for the Builder Class.Help fuel independent AI governance research, new episodes, and the Failure Files™ series.🔗 https://theaigovernancebriefing.com/supportEvery contribution sustains the signal.──────────────────────────────────────ABOUT THE HOST──────────────────────────────────────Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.Independence is not a feature. It is the product.──────────────────────────────────────PRODUCTION NOTES──────────────────────────────────────Host & Producer: Dr. Tuboise FloydCreative Director: Jeremy JarvisA Human Signal ProductionRecorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.──────────────────────────────────────CONNECT──────────────────────────────────────Website: humansignal.ioPodcast: theaigovernancebriefing.comLinkedIn: linkedin.com/in/drtuboisefloydEmail: tuboise@theaigovernancebriefing.comGeneral inquiries: hello@theaigovernancebriefing.com──────────────────────────────────────TRANSCRIPT──────────────────────────────────────Full transcript available at:https://theaigovernancebriefing.com/blog──────────────────────────────────────LEGAL──────────────────────────────────────© 2026 Dr. Tuboise Floyd. All rights reserved. 
Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. Human Signal is an independent research and media platform. Nothing in this episode constitutes legal, regulatory, compliance, or professional advice. Case studies are based on publicly available information and presented as pedagogical tools — not legal findings or accusations of wrongdoing.
──────────────────────────────────────
TAGS
──────────────────────────────────────
AI governance, supply chain risk, third-party vendor risk, data breach, Korean Air, KC&D, Cl0p ransomware, Oracle EBS, CVE-2025-61882, TAIMScore, TAIM framework, Failure Files, institutional risk, data governance, divestiture risk, vendor oversight, AI accountability, GASP framework, Trust Gap, governance failure, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — |
| 3/25/26 | AI Governance: Balancing Innovation With Risk Management | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Col. Kathy Swacina (USA, Ret.), CIO of SherpaWerx, and Taiye Lambo, Founder and Chief Artificial Intelligence Officer of the Holistic Information Security Practitioner Institute (HISPI), to discuss Project Cerebellum, AI governance, and balancing innovation with risk management.
The conversation cuts to the structural reality: without a holistic control layer, the race to be first with AI produces institutions that are exposed before they know it. The panel covers the evolving role of CIOs in AI oversight, what it actually means to build accountability into AI systems, and why most risk management frameworks fail at execution. This is not a theoretical discussion. These are practitioners who have governed AI in high-stakes environments.
──────────────────────────────────────
KEY TOPICS
──────────────────────────────────────
∙ Project Cerebellum and holistic AI control layers
∙ The race to AI deployment vs. responsible governance
∙ The evolving role of CIOs in AI oversight
∙ Building accountability into AI systems — not appending it after deployment
∙ Risk management frameworks that survive real humans, real incentives, and real pressure
──────────────────────────────────────
GUESTS
──────────────────────────────────────
Col. Kathy Swacina (USA, Ret.)
CIO, SherpaWerx
Chair, HISPI AI Think Tank — Project Cerebellum
🔗 https://sherpawerx.com
Taiye Lambo
Founder & Chief Artificial Intelligence Officer
Holistic Information Security Practitioner Institute (HISPI)
🔗 https://www.hispi.org
🔗 https://projectcerebellum.com
LinkedIn: linkedin.com/in/taiyelambo
TAIMScore™ Assessor Workshop
🔗 https://humansignal.io/taimscore_assessor_workshop
──────────────────────────────────────
FRAMEWORKS REFERENCED
──────────────────────────────────────
→ Failure Files™ — humansignal.io/failure-files
→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop
→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp
→ The Trust Gap — humansignal.io/frameworks/trust-gap
→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol
Guest opinions are those of the guest alone.
──────────────────────────────────────
TAGS
──────────────────────────────────────
AI governance, risk management, AI innovation, Project Cerebellum, CIO leadership, AI accountability, AI policy, enterprise AI, government AI, technology leadership, AI oversight, AI control layer, AI ethics, TAIMScore, GASP framework, Trust Gap, Failure Files, HISPI, Col. Kathy Swacina, Taiye Lambo, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — | ||||||
| 3/6/26 | Amazon Broomway: When GPS Routes a Driver Into a Tidal Death Trap | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down the Amazon delivery van reportedly stranded on the Broomway — one of Britain's most dangerous tidal tracks in Essex — after blindly following GPS directions toward Foulness Island. No alert. No override. No human in the loop.
This isn't a story about bad technology. It's a story about ungoverned automation making context-free decisions about human movement in the physical world. And it's exactly the kind of incident the HISPI Project Cerebellum AI Incidents database exists to document — so organizations can stop repeating the same failures.
──────────────────────────────────────
THE INCIDENT
──────────────────────────────────────
An Amazon delivery van followed GPS routing onto the Broomway — a tidal road across the mudflats of the Thames Estuary that floods rapidly and without visible warning. The system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. The driver had no alert, no override prompt, and no human checkpoint between the algorithm's instruction and physical execution.
The Broomway is one of the oldest roads in England, dating to the 1600s. It runs across tidal mudflats and has claimed numerous lives. It is considered one of the most dangerous roads in the United Kingdom.
──────────────────────────────────────
TAIMSCORE™ FAILURE ANALYSIS
──────────────────────────────────────
Running this incident through a TAIMScore™ lens reveals failure across three critical dimensions:
❌ Safety — FAIL
No guardrails for hazardous geographic areas. The routing system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. A system operating in the physical world with zero environmental context is an unacceptable safety liability.
❌ Trust — FAIL
When workers discover that guidance systems can route them into danger, trust collapses — not just in that system, but in all automated guidance. The second-order effect is that workers either override systems entirely (defeating the purpose) or follow blindly (accepting the risk). Neither is acceptable.
❌ Responsibility — FAIL
Who owns the risk when an algorithm routes a human into danger? The driver? The dispatcher? The software vendor? The organization deploying the tool? Without clear accountability architecture, no one owns it — until someone gets hurt.
──────────────────────────────────────
THE CORE THESIS
──────────────────────────────────────
The technology works exactly as designed. The governance around it does not exist.
──────────────────────────────────────
FRAMEWORKS REFERENCED
──────────────────────────────────────
→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop
→ HISPI Project Cerebellum — projectcerebellum.com
→ Failure Files™ — humansignal.io/failure-files
→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp
→ The Trust Gap — humansignal.io/frameworks/trust-gap
→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol
──────────────────────────────────────
TAGS
──────────────────────────────────────
AI governance, ungoverned automation, GPS failure, logistics AI, AI safety, AI accountability, human in the loop, AI risk, responsible AI, AI incidents, AI ethics, TAIMScore, GASP framework, Trust Gap, Failure Files, Project Cerebellum, HISPI, physical world AI, autonomous systems, AI liability, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — | ||||||
| 3/2/26 | Making Digital Accessibility Work In The AI Era | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Dr. Michele A. Williams — UX and accessibility consultant and author of Accessible UX Research — to examine why digital accessibility failures across 97% of the web create equity, resilience, and trust risks that AI can magnify at scale.
Dr. Williams contrasts the medical and social models of disability, addresses ableism and language (person-first vs. identity-first), and argues that checklists cannot replace lived experience or disabled participation in UX research and leadership. The conversation covers how inaccessible code tools and AI trained on inaccessible data produce compounding issues — missing labels, broken keyboard paths, poor semantic structure — and warns against disability dongles: technology solutions that add a layer instead of removing the systemic barrier. Dr. Williams closes with a practical 90-day plan: establish a baseline with scans and process mapping, change defaults, and normalize inclusion from the inside out.
Nothing about us without us.
──────────────────────────────────────
CHAPTERS
──────────────────────────────────────
00:00 Accessibility Wake Up Call
00:57 Meet Dr. Michele Williams
02:07 Equity, Resilience, Trust
04:01 Disability Mindset Shift
05:59 Why Lived Experience Matters
07:14 Person First vs. Identity First
13:01 AI Promise and Harm
15:23 Social Model In Practice
19:58 Beyond Screen Readers
25:02 Exclusion Inside Real Teams
26:58 Semantic Code Chaos
28:32 Standards Lag Tech
29:12 Siri Zoom Panic
31:23 Disability Dongles
33:36 AI Hype Reality
37:25 Beyond Checklists
40:32 90 Day Baseline
42:30 Change Defaults
44:17 Normalize Inclusion
46:47 Nothing About Us
49:13 One Action This Week
50:35 Closing Credits
──────────────────────────────────────
GUEST
──────────────────────────────────────
Dr. Michele A. Williams
UX and Accessibility Consultant
Author, Accessible UX Research — Smashing Magazine
🔗 https://mawconsultingllc.com
LinkedIn: linkedin.com/in/micheleawilliams1
Accessible UX Research
Publisher: Smashing Magazine
🔗 https://www.smashingmagazine.com/2025/06/accessible-ux-research-pre-release/
──────────────────────────────────────
WATCH ON YOUTUBE
──────────────────────────────────────
🎥 https://youtu.be/pxXLNsbyJhc?si=Dt9mf2HK4AtyCx6_
──────────────────────────────────────
KEY TAKEAWAYS
──────────────────────────────────────
1. 97% of the web contains accessibility barriers that actively exclude disabled individuals — this is not a niche compliance issue, it is a structural governance failure at scale.
2. Accessibility is not a checklist. Genuine inclusion requires disabled participation in UX research, leadership, and product decisions from the start.
3. AI trained on inaccessible data reproduces and amplifies inaccessibility. The governance problem precedes the technology problem.
4. Disability dongles — technology layered on top of broken systems — are not solutions. They are evidence that the underlying barrier was never addressed.
5. Organizations serious about inclusion must change defaults, not add accommodations after the fact.
──────────────────────────────────────
COMPANIES REFERENCED
──────────────────────────────────────
Smashing Magazine · Accessibe
──────────────────────────────────────
FRAMEWORKS REFERENCED
──────────────────────────────────────
→ Failure Files™ — humansignal.io/failure-files
→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp
→ The Trust Gap — humansignal.io/frameworks/trust-gap
→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop
→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol
──────────────────────────────────────
TAGS
──────────────────────────────────────
digital accessibility, AI accessibility, UX research, accessible design, disability inclusion, web accessibility, WCAG, screen readers, semantic HTML, keyboard navigation, disability dongles, social model of disability, medical model of disability, person first language, identity first language, ableism, inclusive design, AI bias, AI training data, AI governance, equity resilience trust, 97 percent web accessibility, nothing about us without us, Dr. Michele Williams, Accessible UX Research, Smashing Magazine, Dr. Tuboise Floyd, Human Signal, The AI Governance Briefing
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — | ||||||
| 2/23/26 | Digital Accessibility In An AI World | Digital Accessibility In An AI World — 2026
As a podcast host exploring the intersection of humanity and technology, I keep asking: Are we really including everyone in our digital transformation?
Dr. Michele A. Williams — UX and Accessibility Consultant, author of Accessible UX Research (Smashing Magazine) — challenges us to move beyond checklists and design with, not for, disabled users.
Now live on The AI Governance Briefing: Dr. Michele A. Williams joins Dr. Tuboise Floyd to break down how to make digital accessibility work in an AI world.
🔗 https://mawconsultingllc.com
Accessibility is not just about digital spaces. Accessibility is about fundamental human rights.
What leaders and operators need to know:
✓ How to spot invisible exclusion in UX research and code
✓ Moving beyond compliance checklists to build truly inclusive systems
✓ Using AI for captions and alt text without creating new barriers
✓ The 90-day accessibility baseline your team can sustain
Because real inclusion means ensuring everyone has access to the places and systems they need — whether digital or physical.
Nothing about us without us.
──────────────────────────────────────
TAGS
──────────────────────────────────────
#TheAIGovernanceBriefing #HumanSignal #DigitalAccessibility #ArtificialIntelligence #InclusiveDesign #UXResearch #TechLeadership #Accessibility #AIGovernance #NothingAboutUsWithoutUs #AccessibleUXResearch #DisabilityInclusion
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — | ||||||
| 2/20/26 | AI Activism For Insiders This Is Not Ethics Work | AI Activism For Insiders — This Is Not Ethics Work | 2026
🧠 About Human Signal
Human Signal is an independent AI governance research and media platform. Through the L.E.A.C. Protocol™, GASP™, Noise Discipline, and the Workflow Thesis, we reverse-engineer where governance erodes under capital pressure — and where external oversight must be applied. Independence is not a feature. It is the product.
🔗 humansignal.io
──────────────────────────────────────
TAGS
──────────────────────────────────────
#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIActivism #ResponsibleAI #AIAccountability #GovernanceCollapse #NoiseDiscipline #LEACProtocol #GASP #WorkflowThesis #FrontierAI #AIPolicy #AIEthics #AIInsiders
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — | ||||||
| 2/20/26 | Anthropic Safeguards Chief Resigns: What Governance Collapse Looks Like From Inside | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd examines the resignation of Mrinank Sharma — Anthropic's head of safeguards research — on February 9, 2026, and what it reveals about what happens when billion-dollar infrastructure commitments collide with safety protocols.
This is not a personnel story. It is organizational telemetry. Sharma's departure tells us everything about the gap between stated safety commitments and operational reality — and why that gap is exactly where systemic risk accumulates.
──────────────────────────────────────
KEY TOPICS
──────────────────────────────────────
The Signal, Not Just the Personnel
∙ Mrinank Sharma's resignation as organizational telemetry
∙ Sharma's critical research areas: reality distortion in AI chatbots, AI-assisted bioterrorism defense, and sycophancy prevention
∙ Why departures from safety leadership roles are data points in governance collapse patterns — not random exits
Infrastructure Economics vs. Safety
∙ The capital-intensive reality: lithography, GPUs, data centers, and energy
∙ How financial models lock organizations into velocity-prioritizing postures
∙ The mechanism of slow-motion governance collapse
The Public-Private Governance Gap
∙ U.S. Department of Labor's AI Literacy Framework and public-side initiatives
∙ The irony of raising the AI literacy floor while the ceiling cracks inside frontier labs
∙ Where systemic risk accumulates in this disconnect
The L.E.A.C. Protocol™ Applied
∙ How Lithography, Energy, Arbitrage, and Cooling create the capital pressure that drives governance erosion
∙ Why organizations don't abandon safety — they redefine it, water it down, or sideline the people holding the line
──────────────────────────────────────
FRAMEWORKS REFERENCED
──────────────────────────────────────
→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol
→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp
→ The Trust Gap — humansignal.io/frameworks/trust-gap
→ Noise Discipline — humansignal.io/frameworks/noise-discipline
→ Failure Files™ — humansignal.io/failure-files
→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop
→ Project Cerebellum — projectcerebellum.com
→ U.S. Department of Labor AI Literacy Framework — https://www.dol.gov/sites/dolgov/files/ETA/advisories/TEN/2025/TEN%2007-25/TEN%2007-25%20(complete%20document).pdf
──────────────────────────────────────
TAGS
──────────────────────────────────────
#TheAIGovernanceBriefing #HumanSignal #AIGovernance #AIEthics #Anthropic #AISafety #AIPolicy #FrontierAI #GovernanceCollapse #AIAccountability #AIInfrastructure #LEACProtocol #GASP #TrustGap #FailureFiles #TAIMScore #ProjectCerebellum #MrinankSharma #AIResearch #NoiseDiscipline
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — | ||||||
| 2/15/26 | AI Contracts Are Moving Faster Than Governance: The Gap Where Failures Live | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd asks the question every institutional operator is living right now: Is your leadership signing AI contracts faster than they're building governance?
That gap is where the lawsuits, scandals, and quiet institutional failures live. It's how you wake up with a "successful AI pilot" and a mess in risk, workforce, and public trust. The governance gap is not a technology problem. It is a structural problem — and it is solvable from the inside.
──────────────────────────────────────
THE CORE PROBLEM
──────────────────────────────────────
Organizations are racing to deploy AI without the control systems, oversight mechanisms, and governance frameworks needed to manage the technology accountably. The result is a dangerous gap between what leadership promises and what operations can actually deliver. Mid-career operators are absorbing the exposure while the contracts keep moving.
──────────────────────────────────────
WHO THIS IS FOR
──────────────────────────────────────
∙ Mid-career operators inside AI-disrupted institutions
∙ Federal IT leaders watching risky deployments unfold
∙ University CIOs managing AI rollouts without adequate governance
∙ Enterprise strategists caught between innovation pressure and risk reality
∙ Policy teams trying to create guardrails after the fact
──────────────────────────────────────
KEY TAKEAWAY
──────────────────────────────────────
You don't have to wait for leadership to figure this out. Mid-career operators have the leverage to intervene, redirect, and demand better governance before the failures compound. This isn't about slowing down innovation. It's about surviving it.
──────────────────────────────────────
FRAMEWORKS REFERENCED
──────────────────────────────────────
→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp
→ The Trust Gap — humansignal.io/frameworks/trust-gap
→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis
→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol
→ Failure Files™ — humansignal.io/failure-files
→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop
──────────────────────────────────────
TAGS
──────────────────────────────────────
#TheAIGovernanceBriefing #HumanSignal #GovernanceGap #AIGovernance #AIContracts #RiskManagement #InstitutionalAI #FederalIT #EnterpriseAI #AIDeployment #AIOversight #GASP #TrustGap #WorkflowThesis #LEACProtocol #FailureFiles #TAIMScore #AIPolicy #AIAccountability #BuilderClass
This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy | — | ||||||
| 2/6/26 | Noise Discipline: Social Media Destroys Strategic Focus | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down Noise Discipline — and why builders must treat social feeds as enemy territory.
High-speed social media feeds are not neutral infrastructure. They degrade critical thinking, soften fact-checking instincts, increase cognitive stress, and produce source amnesia — the condition where you forget where ideas came from and mistake them for your own. For institutional operators making consequential decisions about AI deployment, that is not a productivity problem. It is a governance risk.
──────────────────────────────────────
KEY TOPICS
──────────────────────────────────────
∙ How social feeds degrade critical thinking capabilities
∙ The real cost of source amnesia on strategic decision-making
∙ Why constant scrolling softens fact-checking instincts
∙ The stress cascade triggered by high-velocity information environments
∙ Noise Discipline as a cognitive defense framework for operators
──────────────────────────────────────
THE FOUR INTERVENTIONS
──────────────────────────────────────
Treat feeds like radiation zones:
∙ Set timers and enforce strict exposure limits
∙ Question who wrote what and what they're selling
∙ Skip anything that doesn't help you build
∙ Reclaim your attention as a strategic asset
This isn't about productivity hacks. It's about survival in an environment designed to colonize your attention. | — |
| 2/5/26 | Aging Power Grids Meet Autonomous AI: The Infrastructure Breaking Point | EPISODE DESCRIPTION
🎧 When Physics Meets Agentic AI: The Infrastructure Breaking Point
We explore the collision between aging, fragile infrastructure and autonomous AI systems making real-time decisions. We discuss how climate stress, overloaded grids, and agentic AI operating without full human oversight create cascading failure risks across universities, hospitals, and federal agencies.
Critical Questions Explored:
∙ What happens when autonomous AI meets 50-year-old infrastructure?
∙ How climate stress amplifies the risks of agentic systems
∙ Why overloaded power grids become AI failure multipliers
∙ The hidden vulnerability in universities, hospitals, and federal agencies
What Actually Fails First?
When physics meets automation, we examine three failure points:
∙ The hardware: aging infrastructure that can't keep pace
∙ Organizational structures: governance models built for slower systems
∙ Human control itself: when oversight becomes impossible at AI speed
This isn't theoretical. This is the breaking point that institutions are racing toward right now. | — |
| 2/4/26 | Tomorrow's War: Our Children Pay the AI Debt We're Running Now | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd delivers Briefing 005: Tomorrow's War — Our Children Pay Our AI Debt.
Tomorrow's war won't look like soldiers fighting beasts with long teeth. It will look like our children quietly paying the bill for our decision to unleash ungoverned AI and call it "progress."
We are the first generation to hand machines the keys to our attention, our labor markets, and our democracy — and then shrug at the fine print.
──────────────────────────────────────
THE REAL COST
──────────────────────────────────────
We cash in the convenience and productivity now while the real cost is deferred:
∙ Their mental health
∙ Their privacy
∙ Their ability to tell what's real
∙ Their leverage in a world run by systems they never chose
This is the old story of the sins of the fathers — rewritten in code. We are loading our fear of change and our hunger for growth at any price onto our children's backs, compounding it with every unchecked model we ship.
──────────────────────────────────────
TOMORROW'S WAR
──────────────────────────────────────
The fight over that inheritance: whether we govern AI now, or let our children spend their lives paying down a debt they never agreed to incur.
This isn't about technology. It's about intergenerational accountability. | — |
| 2/4/26 | Beyond AI: Quantum Computing and Organoid Intelligence | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd looks beyond AI to the larger technological transformation already in motion — and the power race determining who controls it.
AI was the warning shot. What comes next determines who survives and who becomes obsolete.
──────────────────────────────────────
EMERGING TECHNOLOGIES RESHAPING WHAT'S NEXT
──────────────────────────────────────
∙ Organoid intelligence: computing with biological neural tissue
∙ Quantum computing: exponential leaps beyond classical limits
∙ The critical energy constraints driving the race
∙ Why power consumption is the bottleneck no one is talking about
──────────────────────────────────────
THE HIDDEN RISKS
──────────────────────────────────────
The danger of reducing human involvement in increasingly powerful systems does not disappear when the technology label changes. As these technologies accelerate beyond AI, the question isn't just "what can they do?" — it's "who controls them, and who pays the price when they fail?"
──────────────────────────────────────
WHY THIS MATTERS NOW
──────────────────────────────────────
Tracking where power moves and identifying human stakes in these futures is not a futurist exercise. It is institutional survival. This isn't about keeping up with tech trends. It's about understanding the inflection points before they shape your institution, your workforce, and your options. | — |
| 2/3/26 | Stop Renting AI You Can't Deploy: The Case for Sovereign Infrastructure | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd breaks down why enterprise boards are hemorrhaging AI transformation budgets through subscription models and cloud credits — without building the foundational infrastructure needed to support them.
──────────────────────────────────────
THE FERRARI IN THE SWAMP PROBLEM
──────────────────────────────────────
Companies are renting expensive tools to operate in environments where they cannot be used effectively. You are paying premium prices for capabilities your infrastructure cannot support. The tool is not the problem. The structural mismatch is.
──────────────────────────────────────
WHO PROFITS FROM THE MISALIGNMENT
──────────────────────────────────────
∙ Vendors selling you tools you cannot deploy
∙ Consultants billing for transformations that collapse on contact
∙ Cloud providers stacking credits while your ROI evaporates
──────────────────────────────────────
THE REAL PHYSICS OF YOUR BURN RATE
──────────────────────────────────────
This isn't digital transformation theater. It's capital efficiency and sovereignty.
∙ Why subscription models drain budgets without building capacity
∙ What sovereign infrastructure actually means for enterprise control
∙ How to identify the gap between what you're buying and what you can use
∙ The hidden cost of vendor dependency vs. building owned infrastructure | — |
| 2/2/26 | The Builder Class: AI Governance Briefing for Federal and Enterprise Leaders | EPISODE DESCRIPTION
If you are hearing this, you have successfully moved operations behind the wire. You are no longer a consumer. You are a Builder.
The public internet is a noise engine designed to extract your attention. The AI Governance Briefing is a signal processing facility designed to build your leverage.
We do not trade in content here. We trade in clearance.
──────────────────────────────────────
WHAT THIS MEANS
──────────────────────────────────────
∙ You've crossed the threshold from passive consumption to active building
∙ This isn't another feed competing for your distraction
∙ This is infrastructure — signal, not noise
∙ This is leverage, not entertainment
──────────────────────────────────────
THE BUILDER CLASS
──────────────────────────────────────
This channel exists for people who don't just navigate systems — they design, deploy, and defend them. For federal IT leaders, university CIOs, enterprise strategists, and policy architects who understand that survival requires moving operations behind the wire.
The public internet extracts. We construct.
Welcome to the infrastructure. This is The AI Governance Briefing. This is your clearance. | — |
| 2/2/26 | FedRAMP Is Not Resilience: The Compliance vs. Readiness Gap in GovCon AI | EPISODE DESCRIPTION
In this episode of The AI Governance Briefing, Dr. Tuboise Floyd draws the line that the DC corridor keeps refusing to draw: compliance is not readiness.
We have a dangerous habit of celebrating the paperwork while ignoring the pulse. We hire leaders who can manage a Gantt chart, but we are not hiring architects who understand the physics of the risk.
──────────────────────────────────────
THE FALSE FINISH LINE
──────────────────────────────────────
A "finished" building isn't one that passed inspection. It's one that survives the first thermal runaway without blinking.
For GovCon leaders: FedRAMP High is a certification. Resilience is a discipline.
The loudest sound in a live environment isn't the generator testing. It's the silence after a logic error trips the transfer switch.
──────────────────────────────────────
WHAT THE MARKET ACTUALLY NEEDS
──────────────────────────────────────
The market doesn't need more people who can read the contract. It needs systems architects who can guarantee the signal.
∙ Compliance gets you the contract
∙ Readiness keeps the system alive
∙ Certification proves you checked boxes
∙ Resilience proves you understand physics
Don't just build the shell. Own the uptime.
This is the difference between passing inspection and surviving first contact with reality. | — |
| 2/1/26 | The L.E.A.C. Protocol: Why Lithography, Energy, Arbitrage, and Cooling Determine Which AI Institutions Survive — The AI Governance Briefing | EPISODE DESCRIPTIONIn this episode of The AI Governance Briefing, Dr. Tuboise Floyd introduces the L.E.A.C. Protocol™ — the four physical constraints that determine which AI companies survive the infrastructure war.The market has split in two. One side is cutting jobs. The other is building hardware. If your AI strategy doesn't address all four constraints, you are leaking value.──────────────────────────────────────THE L.E.A.C. PROTOCOL™──────────────────────────────────────L — LithographyThe physics of chip manufacturing. If you cannot access cutting-edge fabrication, you are already obsolete. Control of the semiconductor supply chain — particularly photolithography equipment — is the first constraint every serious AI strategy must address.E — EnergyPower consumption isn't a footnote — it's the constraint. Without gigawatt-scale energy access, your models don't run. Securing reliable power is not an infrastructure decision. It is a strategic one.A — ArbitrageThe strategic positioning to exploit cost differentials in compute, power, and talent before competitors close the gap. Retail electricity pricing is unsustainable at scale. The organizations finding stranded energy, flare gas, and off-peak power are the ones surviving the burn rate.C — CoolingThermodynamics doesn't negotiate. Heat dissipation determines density, efficiency, and survivability. Without adequate cooling infrastructure, clusters cannot run. This is a fundamental solvency issue.──────────────────────────────────────THE BOTTOM LINE──────────────────────────────────────If your strategy ignores thermodynamics, you don't have a company. You have a fire hazard.This isn't about software features or model performance. It's about the physics that determines who stays online and who burns out — literally. The companies that master L.E.A.C. 
constraints will dominate. The ones that don't will exit.This is the infrastructure war.──────────────────────────────────────FRAMEWORKS REFERENCED──────────────────────────────────────→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp→ The Trust Gap — humansignal.io/frameworks/trust-gap→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis→ Failure Files™ — humansignal.io/failure-files→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop──────────────────────────────────────SUPPORT THE SHOW──────────────────────────────────────Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.Help fuel independent AI governance research, new episodes, and the Failure Files™ series.🔗 https://theaigovernancebriefing.com/supportEvery contribution sustains the signal.──────────────────────────────────────ABOUT THE HOST──────────────────────────────────────Dr. Tuboise Floyd is the Founder and Chief Sensemaking Officer of Human Signal — an independent AI governance research and media platform based in Washington, DC. He is the Editor in Chief of The AI Governance Record, Host of The AI Governance Briefing, and a TAIMScore™ Certified Assessor (HISPI, March 2026).A PhD social scientist (Auburn University, Adult Education / Systems Theory), Dr. Floyd reverse-engineers institutional AI failures and builds governance frameworks that operators can actually use. His canonical thesis: most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it.Independence is not a feature. It is the product.──────────────────────────────────────PRODUCTION NOTES──────────────────────────────────────Host & Producer: Dr. Tuboise FloydCreative Director: Jeremy JarvisA Human Signal ProductionRecorded with true analog warmth. No artificial polish, no algorithmic smoothing. 
Just pure signal and real presence for leaders who value authentic sound.──────────────────────────────────────TAGS──────────────────────────────────────#TheAIGovernanceBriefing #HumanSignal #LEACProtocol #AIInfrastructure #AIGovernance #Lithography #EnergyConstraints #Thermodynamics #Cooling #AIHardware #Semiconductors #InfrastructureWar #AIStrategy #CapitalEfficiency #GASP #TrustGap #FailureFiles #BuilderClass #FrontierAI #PhysicsOfAI | — | ||||||
| 1/31/26 | Project Cerebellum: Deploying Survivable AI in Federal and Enterprise Systems | EPISODE DESCRIPTIONIn this episode of The AI Governance Briefing, Dr. Tuboise Floyd opens Season 2 with a direct statement: we are driving 200 miles per hour with no brakes. We are deploying alien intelligence into critical infrastructure without a nervous system.Season 2 stops asking if AI will take your job. It starts asking if the system can survive the deployment.──────────────────────────────────────GUESTS──────────────────────────────────────Col. Kathy Swacina (USA, Ret.)CIO, SherpaWerxChair, HISPI AI Think Tank — Project Cerebellum🔗 https://sherpawerx.comTaiye LamboFounder & Chief Artificial Intelligence OfficerHolistic Information Security Practitioner Institute (HISPI)🔗 https://www.hispi.org🔗 https://projectcerebellum.comLinkedIn: linkedin.com/in/taiyelamboTAIMScore™ Assessor Workshop🔗 https://humansignal.io/taimscore_assessor_workshop──────────────────────────────────────PROJECT CEREBELLUM──────────────────────────────────────The critical missing layer in AI deployment: the control mechanisms, feedback loops, and governance structures that act as a nervous system for autonomous intelligence operating in high-stakes environments. Without it, the system cannot self-regulate, cannot escalate, and cannot stop.──────────────────────────────────────KEY QUESTIONS EXPLORED──────────────────────────────────────∙ What happens when AI operates in critical infrastructure without oversight mechanisms?∙ How do we build reflexive control systems for autonomous intelligence?∙ Why "move fast and break things" is a death sentence in federal and enterprise environments∙ The difference between deploying AI and deploying survivable AIThis isn't about slowing down innovation. 
It's about not crashing at 200 miles per hour.──────────────────────────────────────FRAMEWORKS REFERENCED──────────────────────────────────────→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp→ The Trust Gap — humansignal.io/frameworks/trust-gap→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol→ Failure Files™ — humansignal.io/failure-files→ TAIMScore™ Assessor Workshop — humansignal.io/taimscore_assessor_workshop→ Project Cerebellum — projectcerebellum.com
──────────────────────────────────────TAGS──────────────────────────────────────#TheAIGovernanceBriefing #HumanSignal #AIGovernance #ProjectCerebellum #CriticalInfrastructure #AISafety #AutonomousAI #FederalAI #EnterpriseAI #AIDeployment #SurvivableAI #SystemResilience #GASP #TrustGap #LEACProtocol #FailureFiles #TAIMScore #HISPI #ColKathySwacina #TaiyeLambo #BuilderClass #AIPolicy | — | ||||||
| 11/26/25 | Break the Mold | Building Teams, Leading Change & Impact (feat. Jasher Cox) | Human Signal: Season Finale Part 2 Show NotesEpisode Title: Break the Mold: Jasher Cox on Building Teams, Leading Change, and Owning ImpactGuest: Jasher Cox, Director of Regional Development, University of Notre Dame (https://directory.nd.edu/profile/jcox23@nd.edu)Segment 1: The Broken System and Engineered TensionThe Rigged System: Jasher discusses how career ladders were dismantled and opportunities hoarded, confirming that the real problem was never a lack of talent.The Hiring Mistake: The danger of hiring people you like over subject-matter experts, which ultimately hurts the establishment. Focus must be on hiring experts for plug-and-play success.A Call for Social Skills: A focus on the degradation of social skills post-2020 due to disease and isolation. The critical importance of polishing social skills and actively engaging with leaders to build genuine, non-transactional relationships.Forcing Exposure: Parents and leaders must "force-feed" the next generation by bringing them to the table and exposing them to working environments to foster learning and growth.Segment 2: Architectural Mindset and Institutional ChangeThe Drive for Innovation: Jasher's architectural mindset is rooted in collaboration and a focus on maximizing student enrollment.The HBCU Wrestling Story: The strategic decision to launch HBCU Women's Wrestling to add a unique student demographic and support emerging sports, creating a powerful recruiting success.The Power of Diversity (D.E.I.): D.E.I. is not just about race; it's about being open-minded and committed to hiring and supporting fully capable individuals, regardless of perceived limitations. 
The institutional commitment at Notre Dame to women's inclusion is highlighted as an example.Segment 3: Alliance, Building Bridges, and the Future of LeadershipCalling a Truce: The importance of moving from generational blame to alliance and bridge-building, recognizing that friction is necessary for growth.The AI Perspective: The younger generation's concerns about the accuracy of AI and over-reliance on it, urging a return to basic research methods like visiting the library to learn how to find and retain information.The Best Coaching Mindset: Coaches and leaders must treat athletes (and employees) as capable athletes, not as fragile individuals, understanding that they want to be pushed for success ("Women don't want to be coached like girls, they want to be coached like athletes").Final Call to Action: Lead with Endearment: True leadership involves emotional intelligence and knowing when to show empathy (like letting an employee off early for a family event). You will be rewarded with a dedicated team member who knows you care.Core Philosophy: The segment emphasizes that the final goal is to be good people and lead with the philosophy of cura personalis (care for the whole person).Next StepsShare your thoughts: Let us know what you learned from Jasher Cox in the comments!Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.Tech Specs / Production Note:Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.If this mission resonates with you, you can support the Human Signal launch fund to fuel six months of new episodes, visual briefs, and honest playbooks at https://humansignal.io/supportCONNECTLinkedIn: linkedin.com/in/tuboiseEmail: tuboise@theaigovernancebriefing.comTRANSCRIPTFull transcript available upon request at hello@theaigovernancebriefing.com© 2026 Dr. Tuboise Floyd. All rights reserved. 
Content is part of the Presence Signaling Architecture® (PSA), GASP™, and L.E.A.C. Protocol™. | — | ||||||
| 11/19/25 | Season 1 Finale Pt. 1: Burn the Playbook | Architecting What Doesn’t Exist (feat. Paul Wilson Jr.) | EPISODE DESCRIPTIONIn this episode of The AI Governance Briefing, Dr. Tuboise Floyd sits down with Paul Wilson Jr. — CEO of Paul Wilson Global Solutions — for the Season Finale Part 1: Burn the Playbook | Architecting What Doesn't Exist.This is a conversation about what happens when high-energy talent meets closed-minded institutions — and what it takes to build something that has never existed before.──────────────────────────────────────KEY DISCUSSION POINTS──────────────────────────────────────The Problem with Corporate AmericaPaul shares his experience with a system that actively discourages employee ambition — leading to high turnover, wasted talent, and institutions that cannot capitalize on the next generation of builders.The "Why Would We Want Them to Know That?" QuestionThe shocking perspective of closed-minded management that feared employee growth. When leadership treats knowledge as a threat rather than an asset, the institution has already started its collapse.Fire in the Context of a FireplaceThe metaphor for harnessing high-energy talent. The goal is not to extinguish it. It is to contain it in a context where it produces heat instead of destruction.The Digital Native BuilderGen Z energy in the workplace — what it looks like, what it signals, and how OG wisdom (Gen X/Boomers) can bridge the gap rather than widen it.Ghosting and AuthenticityGhosting is not rudeness. It is conflict avoidance rooted in fear — the flight response when inauthenticity is detected. Gen Z sniffs it out early and exits before the confrontation arrives.The Flaws in Modern Capitalism and InnovationThe broken pursuit of unicorn status. The one-size-fits-all growth model that discounts six-figure businesses and community impact. 
Why purpose-based entrepreneurship outlasts capital-chasing entrepreneurship.Capitalism Is RedeemableA critique of "capitalism at all costs" — and the case for investors and organizations with a social conscience strong enough to govern their own incentives.──────────────────────────────────────TACTICAL TAKEAWAYS──────────────────────────────────────∙ Be a bridge builder — the future of professional success requires alliance and the ability to disagree constructively∙ Embrace the pivot — Ice T, Ice Cube, and every durable builder mastered the ability to be both raw and polished when the context demands it∙ The learner's mindset — at every level of growth, there is new learning required; humility is not weakness, it is the entry fee∙ Lead with value proposition — when pursuing funding, lead with what you bring to the investor, not just the problem you want solved∙ Submission is not surrender — great leaders understand that submitting to a goal, a client, or a higher authority is how they practice sovereignty, not how they abandon it──────────────────────────────────────GUEST──────────────────────────────────────Paul Wilson Jr.CEO, Paul Wilson Global Solutions, LLC🔗 https://www.paulwilsonglobal.com──────────────────────────────────────FRAMEWORKS REFERENCED──────────────────────────────────────→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp→ The Trust Gap — humansignal.io/frameworks/trust-gap→ Noise Discipline — humansignal.io/frameworks/noise-discipline→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol→ Failure Files™ — humansignal.io/failure-files
──────────────────────────────────────TAGS──────────────────────────────────────#TheAIGovernanceBriefing #HumanSignal #BurnThePlaybook #SeasonFinale #PaulWilsonJr #BuilderClass #Entrepreneurship #PurposeDrivenBusiness #GenZ #DigitalNative #Leadership #Ghosting #Authenticity #TalentRetention #CapitalismRedeemed #PivotMindset #LearnersMindset #AIGovernance #GASP #NoiseDiscipline #WorkflowThesis #LEACProtocol | — | ||||||
| 11/17/25 | Break the Mold: Jasher Cox on Building Teams, Leading Change, and Owning Impact | EPISODE DESCRIPTIONIn this episode of The AI Governance Briefing, Dr. Tuboise Floyd is joined by Jasher Cox, Director of Regional Development at the University of Notre Dame, to deconstruct the mechanics of leading change and owning impact.Leadership is not a title. It is an architecture.When standard scripts for team building fail, you have to break the mold. This isn't about management styles. It's about building high-performance systems that survive contact with reality.──────────────────────────────────────KEY INTELLIGENCE──────────────────────────────────────∙ Building Teams — moving beyond the roster to the ecosystem∙ Leading Change — how to pivot without breaking the structure∙ Owning Impact — measuring the signal, not just the noise──────────────────────────────────────GUEST──────────────────────────────────────Jasher CoxDirector of Regional DevelopmentUniversity of Notre Dame🔗 https://directory.nd.edu/profile/jcox23@nd.edu──────────────────────────────────────FRAMEWORKS REFERENCED──────────────────────────────────────→ GASP™ (Governance As a Structural Problem) — humansignal.io/frameworks/gasp→ The Trust Gap — humansignal.io/frameworks/trust-gap→ The Workflow Thesis — humansignal.io/frameworks/workflow-thesis→ Noise Discipline — humansignal.io/frameworks/noise-discipline→ L.E.A.C. Protocol™ — humansignal.io/leac-protocol→ Failure Files™ — humansignal.io/failure-files
──────────────────────────────────────TAGS──────────────────────────────────────#TheAIGovernanceBriefing #HumanSignal #AIGovernance #Leadership #LeadingChange #TeamBuilding #HighPerformance #BuilderClass #JasherCox #NotreDame #OwningImpact #SystemsThinking #GASP #TrustGap #WorkflowThesis #NoiseDiscipline #LEACProtocol #InstitutionalLeadership #SignalNotNoise | — | ||||||
| 11/16/25 | Testing the Broadcast Signal (A Raw Voiceover Test) | EPISODE DESCRIPTIONThe signal is live.This is the raw calibration of The AI Governance Briefing frequency. Before the strategy, before the architecture, there is the voice. This test establishes the baseline for the intelligence to come.Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.
──────────────────────────────────────TAGS──────────────────────────────────────#TheAIGovernanceBriefing #HumanSignal #AIGovernance #SignalIsLive #BuilderClass #Trailer #Calibration #SignalNotNoise #DrTuboiseFloyd #IndependentMedia #AIPolicy #GoverningAI | — | ||||||
Chart Positions
2 placements across 1 market.

