Insights from recent episode analysis
- Audience Interest
- Podcast Focus
- Publishing Consistency
- Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Est. Listeners
Based on iTunes & Spotify (publisher stats).
- Per-Episode Audience: 10,001 - 25,000 (est. listeners per new episode within ~30 days)
- Monthly Reach: 25,001 - 75,000 (unique listeners across all episodes, 30 days)
- Active Followers: 5,001 - 15,000 (loyal subscribers who consistently listen)
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 5/4/26 | AI @ Berry | In the 60th episode of “In the Interim…”, Dr. Scott Berry, Dr. Nick Berry, and Dr. Joe Marion discuss how Berry Consultants uses AI in clinical trial design and software development. The conversation addresses current applications, limitations, implications for productivity, and the ongoing need for human expertise in clinical trial design. The team examines both promising use cases and the risks associated with security, compliance, and AI-generated statistical work. Key Highlights: AI is used to develop user interfaces and code modules, notably expediting tasks like R Shiny app development and software prototyping. Statistical coding for complex modeling and simulation—such as numerical integration and predictive probability calculations—remains unreliable when delegated to AI and still requires direct oversight and manual review. Attention to security and confidentiality is central; Berry prohibits the use of client-sensitive or patient data within AI tools. Generative AI assists with drafting and editing documents, but the output tends to be non-specific, generic, and sometimes imprecise, requiring expert editorial input before use. While embracing AI to improve efficiency, the discussion is critical of current AI hype, especially around black-box modeling, and pushes back against the perception that current AI can replace domain-specific statistical design or strategic judgment. For more, visit us at https://www.berryconsultants.com/ | 51m 06s |
| 4/27/26 | Drug Development and Sports: The 10-Run Rule and Futility | In this episode of "In the Interim…", Dr. Scott Berry and Dr. Nick Berry investigate how futility in clinical trials and stopping rules in sports illuminate very similar decision problems, albeit with very different consequences. Drawing from baseball’s 10-run rule and tournament cuts in golf, the discussion confronts traditional and Bayesian strategies for interim decisions. The episode explains why simulation, not historical trial review, provides the empirical backbone for futility boundaries in clinical trials, and details the mechanics and consequences of aggressive stopping criteria. Using the Biogen aducanumab Alzheimer’s trials, the conversation exposes how a futility rule based on 20% predictive probability halted trials even when meaningful probability of success remained. Scott and Nick address the influence of ethical considerations, cost, regulatory priorities, and statistical rigor, and contrast the strengths of Bayesian predictive probability with conditional power. Key Highlights: Dissects sports futility rules (10-run rule, golf cuts, Bill James heuristic) and their application to clinical trial design. Argues for prospective simulation to define adaptive futility thresholds. Explains how Bayesian predictive probability provides a more robust framework than conditional probability for interim adaptive decisions. Details how aggressive futility criteria may prematurely stop trials and risk missing beneficial treatments, as in the aducanumab case. Explores the intersection of ethics, patient safety, operational efficiency, regulatory standards, and trial cost. | 51m 59s |
| 4/20/26 | ICH-E20, Regulators, and False Choices | In this episode of "In the Interim…", host Dr. Scott Berry undertakes a detailed, methodical critique of ICH-E20 draft guidance language as applied to adaptive clinical trial design. Focusing on an innocuous but corruptible paragraph in Section 3.1, Scott scrutinizes the logic behind regulatory reluctance to accept multiple or complex adaptations in confirmatory trials. Drawing on extensive experience, he highlights how such restrictive interpretations do not reflect practical development realities, instead setting up “false choices” where alternative designs desired by regulators are infeasible. Through operational scenarios—including the SEPSIS-ACT trial, an enrichment design, and sample size re-estimation examples—Scott illustrates the empirical benefits of seamless and multi-adaptive trials for sponsors, patients, and regulators. Technical discussion addresses misconceptions about complexity and bias and stresses the value of presenting realistic alternatives when engaging with regulatory authorities. The episode ultimately encourages a more nuanced dialogue to advance efficient and scientifically robust clinical trials. Key Highlights: Discussion of ICH-E20 Section 3.1 guidance and its operational impact on adaptive designs. Dissection of “false choice” dilemmas in regulatory interactions, referencing real adaptive trial submissions. Case-based examples: SEPSIS-ACT, enrichment, and sample size adaptation trials. Highlighting of myths regarding bias and operational burden from multiple interim analyses. Emphasis on practical strategies for more effective regulatory communication about adaptive trials and realistic alternatives. For more, visit us at https://www.berryconsultants.com/ | 41m 02s |
| 4/13/26 | PANTHER: A Phase 2 International Platform Trial in ARDS | In this episode of "In the Interim…" Dr. Scott Berry is joined by Professors Victoria Cornelius, Danny McAuley, and Anthony Gordon for a technical review of the PANTHER trial—an international, Phase 2 adaptive platform trial evaluating pharmacologic interventions for ARDS. The trial is open-label and does not employ blinding, as discussed in the episode. The primary endpoint is 28-day organ support-free days (death as -1, survivors 0–28 days), analyzed with a Bayesian proportional odds model. PANTHER uses stratification by hyper- and hypoinflammatory subphenotypes, with fixed, equal randomization within each stratum. Analyses for treatments are separated by stratum, reflecting the potential for differential treatment effects. Quarterly interim analyses allow early stopping by stratum for efficacy or futility. Content includes explicit discussion of infrastructure: rapid device deployment, centralized data for trial and future biological discovery, and governance challenges in multinational collaboration. Funding is provided by NIHR (UK), US Department of Defense, CIHR (Canada), NHMRC and MRFF (Australia), HRB (Ireland), and additional support from Germany and Japan. PANTHER is positioned to streamline Phase 2 critical care drug testing and facilitate graduation to larger platforms such as REMAP-CAP, with potential to expedite pharmaceutical evaluation and accelerate ARDS therapeutic development. Key Highlights: Real-time phenotyping (Randox device) to stratify ARDS patients. Separate Bayesian analyses by phenotype stratum. Open-label, fixed randomization within stratum. 28-day organ support-free days as a composite endpoint. Quarterly interim analyses enable early dropping or graduation of arms by stratum. Central data resource and biosample collection for future research. Operational, funding, and device logistics for global trial deployment. Transition of Phase 2 results to established Phase 3 platforms (e.g., REMAP-CAP). For more, visit us at https://www.berryconsultants.com/ | 52m 42s |
| 4/6/26 | A Visit with Byron Gajewski: KUMC, Innovative Trial Designs, the HOBIT Trial | In this episode of "In the Interim…", Dr. Scott Berry connects with Dr. Byron Gajewski, professor of biostatistics and data science at the University of Kansas Medical Center (KUMC), for a detailed discussion on the design, simulation, and operational realities of Bayesian adaptive clinical trials in academic environments. Gajewski discusses his academic background, training at Texas A&M, and progressive adoption of Bayesian principles based on direct experiential advantages in complex data settings. The conversation highlights KUMC’s Fixed and Adaptive Clinical Trial Simulator Working Group, which utilizes FACTS for faculty, staff, and student collaboration, enabling practical simulation, trial protocol development, and in-house applied statistical training. The PAIN-CONTRoLS Trial serves as a practical example of multi-arm Bayesian adaptive design, using response-adaptive randomization for comparative effectiveness in neuropathy research. The NIH-funded HOBIT trial is examined in detail: multi-arm structure, adaptive allocation among investigational arms, fixed control randomization, group-sequential interim analyses, and sliding dichotomy methodology for the Glasgow Outcome Scale Extended. The discussion stresses a shift to probabilistic, evidence-driven interpretation and reporting, shaping operational choices and academic culture for both investigators and trainees. Key Highlights: Gajewski describes how practical challenges in real-world problems catalyzed his transition to Bayesian modeling. KUMC’s working group integrates FACTS software in collaborative simulation and operational trial planning. The PAIN-CONTRoLS Trial: multi-arm Bayesian adaptive design, response-adaptive randomization, real-time analysis, and endpoint-driven allocation. HOBIT trial: adaptive allocation, fixed control arm proportion, group-sequential interims, ordinal endpoint modeling. Emphasis on probabilistic, quantitative reporting over binary outcomes in trial analysis and interpretation. Cultural shift observed among academic collaborators and trainees embracing Bayesian adaptive strategies. | 40m 23s |
| 3/30/26 | A Visit with Stephen Senn: Time, Concurrent Controls, and the Bayesian Guidance | In this episode of "In the Interim...", Dr. Scott Berry hosts Dr. Stephen Senn, award-winning statistician and author, for a discussion on advanced challenges in adaptive and platform trial methodology. Senn draws on experience in academic, pharmaceutical, and regulatory settings to address the recent draft guidance on Bayesian statistics from the FDA and multiple controversies in clinical trial design. Key Highlights: Emphasizes understanding data origin and regression to the mean as essential for trial interpretation, above adherence to Bayesian or frequentist frameworks. Details methodological considerations for time adjustments and model complexity, highlighting that model specification and parameter handling are critical regardless of statistical school. Identifies the limitations of non-concurrent controls in platform trials, focusing on evolving background therapy, site participation, and protocol changes that reduce validity of historical or pooled control data. Analyzes blinding difficulties in trials with multiple treatments and administration modes, using “veiled” blinding as a case and noting the implications for placebo response comparability. Clarifies that operational efficiencies are the principal advantage of adaptive and platform trials, while purported statistical efficiencies can be exaggerated. Stresses the importance of presenting interim analyses transparently to DSMBs when using complex models for time or covariate adjustment, to ensure oversight and interpretation remain rigorous. For more, visit us at https://www.berryconsultants.com/ | 47m 24s |
| 3/23/26 | Making Sense of Hierarchical Composites | In this episode of "In the Interim…", Dr. Scott Berry is joined by statisticians Dr. Amy Crawford, Dr. Cora Allen-Savietta, and Dr. Jessica Overbey for a technical deep dive into hierarchical composite endpoints and the win ratio in clinical trial design. The group addresses clinical and statistical justifications for layered endpoint structures, demonstrates the mechanics of pairwise win ratio analysis, and explores operational and interpretive consequences in both conventional and adaptive trials. The panel scrutinizes analytic limitations, regulatory concerns, and emerging modeling strategies—all grounded in real-world trial examples. Key Highlights: Precise definition and use case for hierarchical composite endpoints in cardiovascular and related trials. Stepwise breakdown of win ratio mechanics, tie-handling, and the distinction between effect estimation (win ratio) and hypothesis testing (FS-test). Discussion of endpoint prevalence and dominance, risk of clinical interpretation being tied to lower-order outcomes, the role of patient exposure, and methods to parse component contributions. Overview of statistical power, role of simulation, and comparative advantages over other composite approaches. Identification of core limitations: interpretive complexity, opaque weighting, and mutable meaning of wins with maturing data. Review of predictive probability for adaptive interim analysis and modeling using ordinal regression. Overview of US and European regulatory perspectives, including support, reservations, and expectations for transparency with graphics and complementary analyses. For more, visit us at https://www.berryconsultants.com/ | 53m 32s |
| 3/16/26 | The SNAP Trial with Professors Tong and Davis | In this episode of "In the Interim…", Dr. Scott Berry interviews Professors Steven Tong and Josh Davis about the SNAP platform trial for Staphylococcus aureus bacteremia. The discussion covers SNAP’s rationale, large-scale adaptive design, methodology, and operational execution at approximately 150 hospitals in 13 countries. Key statistical questions, domain results, pediatric-adult analysis, and global implementation strategy are explored in depth. Listeners will find clear examples of how adaptive platform trials can efficiently address clinically relevant questions in infectious disease, while highlighting the nuances of trial design, statistical thresholds, and network collaboration. Key Highlights: High and unchanging mortality for Staphylococcus aureus bacteremia—over one million deaths annually. SNAP leverages silo-based structure (MSSA, MRSA, PSSA) and factorial domains for simultaneous, efficient investigation of treatments. Cefazolin shown non-inferior to flucloxacillin for MSSA with lower related acute kidney injury. In PSSA, penicillin demonstrated significantly less toxicity and a favorable mortality signal over flucloxacillin; the mortality difference did not meet the statistical superiority threshold. Futility reached in the adjunctive clindamycin domain for effect on 90-day mortality. Both adults and children enrolled, with pediatric results using statistical borrowing from adults in line with FDA Bayesian guidance. Ongoing platform expansion includes bacteriophage therapy, antiplatelet domains, and evaluation of diagnostic strategies. Statistical leadership: Dr. Anna McGlothlin (Berry Consultants), Dr. Julie Marsh (statistics lead). For more, visit us at https://www.berryconsultants.com/ | 53m 53s |
| 3/9/26 | Bayesian Borrowing in Phase 3 Trials | In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele examine Bayesian borrowing in Phase 3 clinical trials, focusing on statistical handling of prior information and real-world FDA interactions. The episode opens with an analogy, comparing prior probability in Bayesian analysis to interpreting a home pregnancy test, succinctly demonstrating the effect of prior knowledge on trial interpretation. The discussion addresses technical challenges—how borrowing inflates Type I errors and why this is addressed differently under Bayesian operating characteristics. Concrete examples include dynamic versus static borrowing approaches, and formal integration of prior evidence in regulatory submissions. Case studies center on the WATCHMAN device (PROTECT AF and PREVAIL trials) and REBYOTA, illustrating FDA engagement, relevant trial design tactics, and published outcomes. The episode also critiques common pitfalls such as selective data use and improper prior construction, emphasizing the FDA’s focus on comprehensive and unbiased historical sources. Key Highlights: Pregnancy test analogy used to clarify prior probability in trial interpretation. Bayesian borrowing’s effects on Type I error and statistical thresholds. Case studies: WATCHMAN device (PROTECT AF, PREVAIL) and REBYOTA approvals. Dynamic borrowing versus static borrowing strategies in regulatory settings. Risks of cherry-picking and importance of unbiased, relevant prior data. FDA guidance and review procedures for Bayesian trials. For more, visit us at https://www.berryconsultants.com/ | 46m 38s |
| 3/2/26 | The Art of Storytelling with Shaun Cassidy | In Episode 51 of "In the Interim…", Dr. Scott Berry interviews writer, producer, and performer Shaun Cassidy to examine the practical elements of storytelling that matter in scientific and statistical communication. Cassidy draws on his experience in television, music, and live performance—including his role as writer and Executive Producer of New Amsterdam—to present clear parallels between audience engagement in show business and in clinical research. The conversation prioritizes improving narrative precision, emotional resonance, and authenticity when conveying complex topics to varied audiences. Key Highlights: Cassidy demonstrates that audiences retain emotional impact over factual content, asserting that “people don’t remember what you say, but how you made them feel.” Emphasis on narrative specificity: personal, concrete details foster stronger audience connection than generalized statements, countering assumptions about broad relatability. Effective communication relies on reactive delivery—improvised response to audience cues—rather than rigid, memorized scripts; Cassidy notes this principle applies across disciplines. Role of authenticity and vulnerability: openly stating discomfort or introversion facilitates greater audience trust and personal connection, especially in technical or scientific fields. Anecdotes from Cassidy’s work in television, music, and teaching illustrate the central role of storytelling structure and audience feedback, with parallels drawn to professional scientific presentations. Alan Alda’s illustration of improv for scientists is discussed as an example of bridging technical expertise with adaptive communication skills. For more, visit us at https://www.berryconsultants.com/ | 52m 22s |
| 2/23/26 | The Fallacy of Ordinal Endpoints | In this episode of "In the Interim…", Dr. Scott Berry and Dr. Lindsay Berry investigate the statistical foundations and clinical implications of analyzing ordinal endpoints, drawing on experience from major stroke and COVID-19 trials. Discussion centers on the Modified Rankin Scale, DAWN, MR CLEAN, and REMAP-CAP, demonstrating that methods such as proportional odds, dichotomization, and utility weighting all impose explicit or implicit clinical weights on the outcome categories. The episode presents direct mathematical derivations, exposes the equivalence between proportional odds models and value-weighted analysis, and uses real trial data to explore how statistical and clinical perspectives on endpoint weighting may diverge. Emphasis remains on transparency and the need for clinically relevant weight assignment in trial endpoints. Key Highlights: Structural overview and clinical significance of the Modified Rankin Scale scores. Illustration that proportional odds models and dichotomized analyses apply hidden, prevalence-driven or threshold-based weights. Utility weighting in DAWN, formulated from EQ-5D patient utilities and economic studies, with observed alignment. MR CLEAN investigators' critique of utility weighting; empirical data demonstrated relative consistency and challenged the claim that statistical approaches resolve variation across patients. REMAP-CAP platform trial: Organ Support Free Days endpoint analyzed with a proportional odds model, which imposed weights on the scale from death to free of organ support. Extension of these arguments to win ratio/rank-based approaches, with caution that all methods encode clinical assumptions. For more, visit us at https://www.berryconsultants.com/ | 43m 54s |
| 2/16/26 | Mr. Berry Goes to Washington | In this episode of "In the Interim…", Dr. Scott Berry marks the podcast’s one-year anniversary, sharing listener metrics, watch data, and regional engagement. He then delivers a step-by-step analysis of the FDA meeting process, detailing the progression from initial sponsor meeting requests and question submission to briefing book preparation, feedback cycles, and in-person logistics for a Type C meeting at the White Oak facility. Drawing from more than 25 years of trial design and regulatory experience, Scott offers precise guidance on technical preparation, sponsor responsibilities, and common errors in sponsor-FDA dialog, emphasizing what works and what wastes time inside the one-hour meeting constraint. His practical approach focuses on clarity, respect for process, and actionable advice. Key Highlights: Slightly over 30,000 people tuned in during the first year across 45 episodes; about 10,000 via audio and 20,000 via video, with worldwide reach. FDA meeting workflow: request, submit four to eight questions, draft briefing book, receive written feedback, strict one-hour in-person discussion controlled by sponsor. Advice on briefing book content, avoiding new materials at the meeting, and even what not to bring into the White Oak facility. Sponsor pitfalls: disingenuous patient advocacy, asking impossible questions, taking an adversarial stance in statistical discussion. For more, visit us at https://www.berryconsultants.com/ | 47m 14s |
| 2/9/26 | Platform Trial in Orthopaedic Surgery | Dr. Nathan O’Hara (University of Maryland), Dr. Gerard Slobogean (UC Irvine), and Dr. Sheila Sprague (McMaster University) describe the launch and design of the Musculoskeletal Adaptive Platform Trial (MAPT)—the first major adaptive platform trial in orthopaedic surgery. The discussion covers MAPT’s master protocol structure, patient-centered endpoint framework, and operational strategies for multinational implementation. Focus areas include the FASTER-HIP domain’s use of Bayesian modeling with a hierarchical clinical endpoint and the standards established for adaptation, data coordination, and future scalability. Listeners gain insight into a trial infrastructure designed to lower barriers and facilitate ongoing evidence generation in musculoskeletal trauma care. Key Highlights: MAPT as a scalable master protocol for orthopaedic intervention evaluation. Hierarchical, patient-centered endpoint (survival, 4-level ambulation, days alive/out of hospital), analyzed with a Bayesian-modeled, non-parametric win ratio. Domain-specific adaptation thresholds based on clinical differentiation. Interim analyses after 100 patients, then every 50, informing early adaptation. 40 sites across the US, Canada, and Europe, with centralized data management at McMaster. A unified DSMB structure with capacity for domain-specific expertise as needed. Tiered protocol access: open sharing, collaboration, direct integration. Infrastructure enables rapid domain addition and multi-investigator participation. For more, visit us at https://www.berryconsultants.com/ | 40m 56s |
| 2/2/26 | A Visit with Michael Harhay | In this episode of "In the Interim…", Dr. Scott Berry speaks with Dr. Michael Harhay, Associate Professor at the University of Pennsylvania and Director of the Center for Clinical Trials Innovation. The conversation explores Dr. Harhay’s progression through neuroscience, philosophy, epidemiology, and statistics, examining how this academic path shapes his work in clinical trial methodology. They discuss the Center’s role in addressing unresolved methodological questions arising from pragmatic, health system-based trials, including challenges with cluster and factorial randomized designs. The episode focuses on statistical and conceptual issues in endpoint selection for critical care, such as the analysis of informatively truncated outcomes, composite endpoints including organ support-free days, and the application of the win ratio. The increasing use of Bayesian methods in trial design is addressed. Key Highlights: Dr. Harhay’s academic background and transition into clinical trial methodology at Penn. The mission of the Center for Clinical Trials Innovation to support methodologic research and training, particularly among statisticians participating in multi-center health system trials. Discussion of hospital-level and provider-level randomization strategies in cluster and factorial designs within health systems. Ongoing challenges in analysis of composite and informatively truncated endpoints, especially in critical care, exemplified by ventilator-free and organ support-free days. Evaluation of analytic strategies including survival average causal effect, composite endpoints, and the win ratio, with emphasis on the need for clinical rather than purely statistical weighting of outcomes. Consideration of the conceptual strengths of Bayesian methods and their integration into modern trial design and decision analysis. For more, visit us at https://www.berryconsultants.com/ | 39m 08s |
| 1/26/26 | The FDA Bayesian Guidance | In this episode of "In the Interim…", Dr. Scott Berry and Dr. Kert Viele deliver a quick reaction to the FDA’s draft guidance on Bayesian statistics for clinical trials of drugs and biologics. Their assessment addresses the structure, content, and impact of the document, emphasizing evidence-based requirements and guidance scope. The episode breaks down regulatory language, technical expectations, and workflow implications for clinical trial sponsors and statisticians. Key Highlights: Clear distinction between trials justified by Type I error control and trials justified by agreement on Bayesian priors and decision rule. Explanation of how informative priors can be created based on external or historical data. Technical explanation of dynamic discounting/borrowing, especially in Bayesian hierarchical models for rare populations, pediatric-adult extrapolation, related disease subgroups, and platform and basket trials (e.g., ROAR). In-depth look at the necessity of sensitivity and robustness checks for different priors, and the FDA’s design prior and analysis prior terminology. FDA’s requirements for accepting external data sources: data provenance, patient-level comparability, recency, and appropriate covariate adjustments. Comparison with ICH E20 on adaptive designs, providing context for ongoing regulatory harmonization and possible influence on international regulatory directions. Direct warning against attempts to misuse Bayesian methodology as a substitute for scientific rigor; legitimate uses must meet FDA standards and not simply serve to lower evidentiary bars. Resource: FDA News Release: https://www.fda.gov/news-events/press-announcements/fda-issues-guidance-modernizing-statistical-methods-clinical-trials For more, visit us at https://www.berryconsultants.com/ | 43m 20s |
| 1/19/26 | Path 2 Parkinson's Prevention with Drs. Simuni and Wendelberger | In this episode of "In the Interim…", Dr. Scott Berry is joined by Dr. Tanya Simuni, Arthur C. Nielsen Jr. Professor of Neurology and Director of the Parkinson’s Disease and Movement Disorders Center at Northwestern University, and Dr. Barbara Wendelberger, Senior Statistical Scientist at Berry Consultants. The conversation focuses on the Path to Prevention (P2P) platform trial—an international, multi-arm prevention study in Parkinson’s disease targeting participants defined by biological markers, specifically alpha-synuclein pathology, prior to clinical diagnosis. The discussion covers the PPMI cohort, trial operational and statistical structure, the rationale behind biomarker-driven inclusion, and the use of Bayesian platform trial design. Key Highlights: Parkinson’s disease pathobiology and risk: genotype-phenotype variability, multi-system involvement, and the central roles of age, environment, and genetics. Michael J. Fox Foundation’s PPMI cohort: 4,000+ participants, prospective longitudinal biomarker and clinical data, high participant retention, enabling study of early Parkinson’s. P2P platform structure: multi-arm design, two-stage randomization with shared placebo group, integration of non-randomized PPMI cohort in Bayesian analysis for improved inference. Inclusion criteria: prodromal population biologically defined by CSF alpha-synuclein seed amplification and dopaminergic imaging (DAT-SPECT), highlighting regulatory nuances. Dual primary endpoints: biomarker (DAT-SPECT) and clinical (MDS-UPDRS Part III), 24-36 months follow-up. Commitment to public data sharing in line with the Michael J. Fox Foundation’s open science philosophy. For more, visit us at https://www.berryconsultants.com/ | 41m 39s |
| 1/12/26 | Statistical Communication | In this episode of “In the Interim…,” host Dr. Scott Berry examines the challenge of communicating complex statistical concepts to non-statistical audiences. Drawing from firsthand experiences in agriculture, professional golf, and clinical development, as well as examples involving historical and scientific figures, Scott reflects on why technical rigor alone often fails to influence. The discussion focuses on the consequences of mismatched language, the importance of empathy, and the utility of simulation when bridging the gap between analysis and stakeholder understanding. Key Highlights: Illustrated barriers to statistical communication using stories from farming, golf, and early career encounters. Examples involving John Glenn, Ada Lovelace, and Charles Babbage show how communication, not just science, determines impact. Insights from Alan Alda on empathy as a foundational tool for scientists presenting technical ideas. Clinical trial simulations revealed knowledge gaps—such as misunderstanding of power—when communicating with decision-makers. Emphasizes the necessity of translating analytic outputs into operational, financial, or clinical language for meaningful impact. For more, visit us at https://www.berryconsultants.com/ | 41m 31s |
| 12/29/25 | The Rumor of One Trial for Substantial Evidence | In this episode of "In the Interim…", host Dr. Scott Berry and frequent co-host Dr. Kert Viele, Senior Statistical Scientist at Berry Consultants, analyze the potential shift in FDA regulatory policy from requiring two independent trials to accepting a single trial as sufficient for “substantial evidence” in drug approvals. Reflecting on the statutory and regulatory definitions originating with the 1962 Federal Food, Drug, and Cosmetic Act and 21 CFR 314.126, they dissect current and emerging interpretations, referencing recent statements by Dr. Martin Makary and coverage described in a STAT article. The conversation focuses on the scientific and statistical foundations of the two-trial threshold, challenges with dichotomous results, and how pooled evidence might increase efficiency and rigor. They discuss statistical implications including alpha thresholds, sample size effects, program power, and the consequences for clinical labeling. The episode also introduces Bayesian approaches as a method for integrating totality of evidence. Attention is given to both population breadth and the possible risks of a narrowed evidentiary base under a single-trial standard. Key Highlights: Regulatory and historical context of “substantial evidence” since 1962 and current FDA directives. Industry practice: simultaneous Phase III trials, statistical power, and evidentiary replication. Criticism of binary, trial-level significance thresholds; merits of pooling or meta-analysis. Potential efficiency gains and tradeoffs with a more stringent alpha requirement for single trials. Strategic and operational effects on trial design, sample size, and label indications. Bayesian statistical approaches for full evidence integration, discussed as an analytical viewpoint. | 40m 11s |
| 12/22/25 | Communication for Scientists: A Discussion with Jenny Devenport | In this episode of "In the Interim…", Dr. Jenny Devenport, Global Head of Methods, Collaboration, and Outreach at Roche, joins Dr. Scott Berry for a detailed discussion on career evolution, statistical culture, and communication in the pharmaceutical industry. Dr. Devenport describes her transition from psychology in New Mexico to statistical leadership in Basel, emphasizing the formative role of early academic mentors and her experience working across the US and Europe. She outlines her current functions in methods development, internal collaboration, and industry outreach, highlighting active engagement with academic and regulatory communities. The episode scrutinizes differences in workplace culture, such as the emphasis on debate and long-term collaboration in Europe, and differences in educational backgrounds among statisticians. The conversation covers practical barriers to the adoption of Bayesian methods, the role of communication in the acceptance of futility analyses in pharma, the importance of scale in problem-solving, and the emergence of AI as a tool for statisticians. Dr. Devenport provides pragmatic strategies for statisticians to improve their influence through tailored, audience-specific communication. Key Highlights: • Dr. Devenport’s academic and geographic move from the US to Europe • Responsibilities in methods development, collaboration, and outreach at Roche • Contrasts in US and European pharmaceutical statistics cultures • Measured perspective on AI’s effect on statisticians’ responsibilities • Practical guidance for statisticians on communication and influence | 39m 37s | ||||||
| 12/15/25 | Navigating the Arena: Platform Trials | In this episode of "In the Interim…", Dr. Scott Berry delivers a metaphoric critique of single-question trial infrastructure through the sports arena analogy, illustrating the cost, patient burden, and data inefficiency of conventional clinical trials. He provides a methodical comparison of traditional trial models and the platform trial approach, clarifying distinctions between platform, basket, and master protocol structures. Through examples from HEALEY ALS, I-SPY 2, PALM (Ebola), REMAP-CAP, RECOVERY, EPAD, GBM AGILE, and Precision Promise, Scott outlines the measurable efficiencies of platform trials: shared control arms, flexible arm addition and removal, reduced placebo exposure, accelerated timelines, and improved statistical inferences. The episode further examines platform trial performance during the COVID-19 pandemic, highlighting trial adaptability and the rapid generation of actionable evidence. Scott also addresses failure scenarios, focusing on EPAD Alzheimer’s as a cautionary case in platform sustainability, cost allocation, and initial funding barriers. Listeners will gain a perspective on the operational and statistical design choices governing today’s most innovative clinical studies. Key Highlights: • Arena analogy applied to delineate clinical research inefficiency. • Operational, statistical, and patient-focused efficiencies in platform versus single-question trials. • Precision in terminology: platform, basket, and master protocol definitions. • Effects of platform trials on speed and scientific rigor. • Factors underlying both platform trial successes and failures. For more, visit us at https://www.berryconsultants.com/ | 50m 27s | ||||||
| 12/8/25 | Jumping Hurdles: Interim Analyses for Funding Decisions | In episode 40 of "In the Interim…", Dr. Scott Berry examines the statistical, operational, and behavioral challenges of using interim analyses as triggers for funding in adaptive and seamless Phase II/III clinical trials. The episode presents a hypothetical scenario typical of rare disease drug development, contrasting conventional two-stage development with a seamless design and highlighting efficiency gains in sample size, patient allocation, and trial duration. Scott details the construction of administrative (financial) interim analyses, underscoring their distinction from futility analyses and their role in funding decisions when complete funding is not secured upfront. He addresses FDA operational bias concerns, emphasizing blinding and limiting information sharing to protect trial integrity. Finally, the episode focuses on developing objective interim funding criteria—using Bayesian predictive probability and assurance—and on leveraging illustrative simulation outputs and sample datasets to bridge the “I’ll know it when I see it” divide between scientists and funders. The discussion is practical, empirical, and tailored to real funding barriers in clinical research. Key Highlights: • Statistical structure and efficiency of seamless Phase II/III trial designs • Administrative (financial) interim analysis setup as funding decision triggers, distinct from futility analyses • FDA operational bias guidance and requirements for trial blinding • Predictive probability and assurance as objective interim criteria • Sample data and simulation outputs to facilitate stakeholder alignment. For more, visit us at https://www.berryconsultants.com/ | 42m 19s | ||||||
| 12/1/25 | Discussion with Kaspar Rufibach | In this episode of "In the Interim…", Dr. Scott Berry interviews Dr. Kaspar Rufibach, Co-Head of Advanced Biostatistical Sciences at Merck. The conversation tracks Rufibach’s evolution from academic training in actuarial and mathematical statistics through cancer research collaborations, postdoctoral work, and academic consulting, leading to applied roles at Roche and Merck. Discussion centers on methodological rigor, pragmatic approaches to assurance and predictive probability, and real-world experience in drug development. Rufibach examines the organizational integration of quantitative disciplines at Merck—incorporating pharmacology, real-world data, statistics, programming, and data science—while remaining candid on the role and boundaries of AI in current pharmaceutical practice. Key Highlights: • Statistical education in Switzerland, bridging theory and early applied cancer trial experience • Move from academic consulting to a trial statistician role at Roche, emphasizing structured problem-solving in drug development • Approach to predictive probability and assurance, balancing Bayesian and frequentist tools with a strict emphasis on practicality • Formation of professional special interest groups with EFSPI and PSI, stepping in to address unmet community needs rather than seeking formal leadership • Perspective on Merck’s unified quantitative department, designed to remove silos and leverage interdisciplinary expertise • Cautious view of AI as a complement to specific tasks, but not yet a replacement for nuanced clinical trial design or regulatory-facing strategies • Current focus on expanding causal inference methods and multi-state modeling for improved trial efficiency and evidence synthesis. For more, visit us at https://www.berryconsultants.com/ | 47m 18s | ||||||
| 11/24/25 | Bayesian Statistics in Clinical Trials: The Past, Present, and Future | In this episode of "In the Interim…", guest host Cooper Berry moderates a detailed discussion on the evolution and practice of Bayesian methodology in clinical trials with fellow family members Dr. Don Berry, Dr. Scott Berry, Dr. Lindsay Berry, and Dr. Nick Berry. The panel outlines the foundational principles of Bayesian decision-making in medical research, ethical debates informed by historical reports like the Belmont Report, and the shift in regulatory acceptance. Computational developments such as Markov Chain Monte Carlo (MCMC) are examined for their role in enabling applied Bayesian models. Panelists give practical accounts of implementing adaptive and platform trials, including I-SPY 2 and REMAP-CAP, and analyze challenges faced during the COVID-19 pandemic. The implications of Bayesian statistics in artificial intelligence and contemporary clinical decision-making are explored, highlighting ongoing shifts in trial design and evidence synthesis. Each discussion is grounded in direct experience and technical rigor, providing insight into both the operational realities and future trajectory of Bayesian-driven methods in clinical research. Key Highlights: • Historical development of Bayesian clinical trial design and foundational influence from Leonard J. Savage to current methods • Ethical tension in trial conduct, referencing the Belmont Report and equipoise • Advances in computation and Markov Chain Monte Carlo (MCMC) • Regulatory frameworks for Bayesian adaptive trials, including FDA guidance • Implementation details from I-SPY 2 and REMAP-CAP platform trials • Bayesian methodology in the context of artificial intelligence, precision medicine, and future data integration. For more, visit us at https://www.berryconsultants.com/ | 1h 07m 17s | ||||||
| 11/17/25 | A Visit with Stroke Neurologist Dr. Jeff Saver | In episode 37 of "In the Interim…", Dr. Jeff Saver, Director of the UCLA Comprehensive Stroke and Vascular Neurology Program, details his shift from behavioral neurology to clinical stroke research after early engagement with multicenter trials like TOAST. The discussion covers the biology of acute ischemic stroke, quantifying neuronal loss, and the scientific underpinnings of “time is brain.” Dr. Saver outlines the evolution of endovascular therapy, from early device challenges to current reperfusion success rates exceeding 85%. Key methodological issues in stroke trial analyses are presented, including debate over endpoint selection—dichotomous versus ordinal approaches and the limitations therein. Special focus is placed on the utility-weighted modified Rankin Scale, which assigns empirically derived, patient-centered health values to each disability state, providing a comprehensive measure that captures both benefit and harm. The episode explores regulatory hesitancy, differing analytic preferences within the field, and the design prospects for neuroprotectant interventions. Heterogeneity in patient outcomes and implications for public health and trial methodology are addressed. The episode provides an empirical account of clinical trial endpoint selection, interpretation, and future directions in cerebrovascular research. Key Highlights: • Early career influences and pivotal trial participation. • Pathophysiology and quantification of acute stroke injury. • Endovascular device development and clinical impact. • Comparative analysis of endpoint methods: dichotomous, ordinal, and utility-weighted approaches. • Technical derivation and application of utility-weighted mRS. • Ongoing regulatory and methodological debate. • Heterogeneity in ischemic vulnerability and future trial directions. For more, visit us at https://www.berryconsultants.com/ | 36m 59s | ||||||
| 11/10/25 | The Saga of the Lecanemab Adaptive Phase II Trial | In Episode 36 of "In the Interim…", Dr. Scott Berry and Dr. Don Berry analyze the Phase II trial of Lecanemab (BAN2401) in Alzheimer’s disease, focusing on the application of adaptive Bayesian methods following persistent failures in Alzheimer’s drug development. The conversation covers the specific design features of five active arms, response adaptive randomization, and a longitudinal Bayesian model driving interim decisions, as well as direct operational and statistical challenges encountered during the trial. The hosts address regulatory proceedings, critique from "experts" regarding adaptive methods on noisy cognitive endpoints, and the direct alignment of the trial’s Bayesian 18-month efficacy estimates with the subsequent Phase III results and regulatory approvals. Key Highlights: • Alzheimer’s drug development context: Widespread Phase III failures prompted a retreat from conventional trial designs and a demand for greater rigor and adaptability. • Lecanemab Phase II methodology: Five active arms, two dosing schedules, response adaptive randomization, and adaptive interim analyses at every 50 patients enabled real-time adjustment and efficient dose evaluation. • Bayesian modeling and imputation: Use of a longitudinal model to address missing data, forecast 12- and 18-month outcomes, and inform both allocation and stopping criteria. • Operational adaptations: The design accommodated unplanned safety restrictions, such as stratified randomization for APOE4-positive participants after ARIA signals. • Expert skepticism: Addressed Paul Aisen’s concerns about adapting to noisy interim cognitive data, emphasizing safeguards against erroneous stopping or success. • Regulatory outcome: The 18-month efficacy estimates from Bayesian modeling during Phase II matched Phase III findings; FDA granted accelerated approval based on amyloid reduction and later full approval after Phase III confirmation. For more, visit us at https://www.berryconsultants.com/ | 51m 46s | ||||||
Chart Positions
16 placements across 15 markets.

