
Insights from recent episode analysis
- Audience Interest
- Podcast Focus
- Publishing Consistency
- Platform Reach
Insights are generated by CastFox AI using publicly available data, episode content, and proprietary models.
Total monthly reach
Estimated from 3 chart positions in 3 markets.
By chart position
- 🇮🇳 IN · Courses #117 · 1K to 10K
- 🇭🇰 HK · Courses #56 · 3K to 10K
- 🇦🇪 AE · Courses #75 · 3K to 10K
- Per-Episode Audience — Est. listeners per new episode within ~30 days: 2.1K to 9K (🎙 Daily cadence · 40 episodes · Last published 2w ago)
- Monthly Reach — Unique listeners across all episodes (30 days): 7K to 30K (🇮🇳 33% · 🇭🇰 33% · 🇦🇪 33%)
- Active Followers — Loyal subscribers who consistently listen: 3.9K to 17K
Market Insights
Platform Distribution
Reach across major podcast platforms, updated hourly
- Total Followers: —
- Total Plays: —
- Total Reviews: —
* Data sourced directly from platform APIs and aggregated hourly across all major podcast directories.
On the show
Recent episodes
EP 39 | CME295 in 15 Minutes (The Full Recap)
Apr 22, 2026
Unknown duration
EP 29 | CS25 in 10 Minutes (The Full Recap)
Mar 26, 2026
Unknown duration
EP 21 | Duke ML for PMs in 10 Minutes (The Full Recap)
Mar 2, 2026
Unknown duration
EP 15 | CS21SI: AI for Social Good in 10 Minutes (The Full Recap)
Feb 6, 2026
Unknown duration
CS124 in 7 minutes
Jan 20, 2026
Unknown duration
Social Links & Contact
Official channels & resources
Official Website
RSS Feed
| Date | Episode | Description | Length |
|---|---|---|---|
| 4/22/26 | EP 39 \| CME295 in 15 Minutes (The Full Recap) | Short on time? We’ve distilled the entire Stanford CME295 course into a single, high-energy video recap. This "Cram Session" takes you on a complete journey from the absolute basics of natural language processing to the cutting edge of Large Language Models. Watch or listen for the "Best Of" our course deep dives: The Foundation: Moving past RNNs into the Self-Attention revolution and the core Transformer architecture. The Training Pipeline: The massive undertaking of Pre-training, Supervised Fine-Tuning (SFT), and Preference Tuning to build a safe assistant. Reasoning & Agents: How models use Chain of Thought to solve multi-step problems, and how RAG and Tool Calling turn them into autonomous agents. The Future: A look at what's next, including Vision Transformers (ViT), Diffusion LLMs, and highly capable Small Language Models (SLMs). Note: This is an AI-generated study resource created via NotebookLM based on the Stanford CME295 curriculum and personal study notes. | — |
| 3/26/26 | EP 29 \| CS25 in 10 Minutes (The Full Recap) | Short on time? We’ve distilled the entire Stanford CS25: Transformers United course into a single 10-minute video. This "Cram Session" covers the journey of the Transformer architecture from a text-processing breakthrough to the engine driving the future of biology, visual media, and reasoning. Watch or listen for the "Best Of" our course deep dives: The Architecture: How Attention mechanisms and parallelization changed everything. Beyond Next-Word Prediction: Reinforcement Learning and the pursuit of AGI. Transformers in the Wild: Applying the architecture to Diffusion Models (image generation) and the Biology of LLMs (DNA/proteins). Modeling Reality: World Models for medicine and mastering time for video generation. Note: This is an AI-generated study resource created via NotebookLM based on Stanford’s CS25 curriculum and personal study notes. | — |
| 3/2/26 | EP 21 \| Duke ML for PMs in 10 Minutes (The Full Recap) | Short on time? We’ve distilled the entire Duke University "Machine Learning Foundations for Product Managers" course into a single 10-minute recap. This is the ultimate PM "Cram Session" for bridging the gap between business strategy and data science. Watch or listen for the "Best Of" our course deep dives: The Core Vocabulary: Features, labels, and the 3 types of ML. The Modeling Process: The 5 strategic steps to get a model into production. Model Evaluation: The Precision vs. Recall trade-off and surviving the "Accuracy Trap." Model Selection: Knowing when to use Linear Models versus Tree Models. Note: This is an AI-generated study resource created via NotebookLM based on Duke University’s ML for Product Managers curriculum and personal study notes. | — |
| 2/6/26 | EP 15 \| CS21SI: AI for Social Good in 10 Minutes (The Full Recap) | Short on time? We’ve distilled the entire Stanford CS21SI: AI for Social Good course into a single 10-minute video. This "Cram Session" covers the journey from ethical frameworks to technical execution across four major domains of social impact. Watch to see the "Best Of" our course deep dives: Education: The Minerva High School case study and the pitfalls of "Magic Box" metrics. Environment: Deep Learning vs. the Wildfire crisis and the carbon cost of AI. Information: NLP, Transformers, and the battle against automated disinformation. Accessibility & Conservation: Computer Vision for healthcare and Reinforcement Learning for wildlife protection. This video is the ultimate summary of our "For AI, By AI, To AI" experiment, synthesized via NotebookLM and Gemini from Stanford's open-source curriculum. Note: This is an AI-generated study resource. | — |
| 1/20/26 | CS124 in 7 minutes | A quick and easy way to understand CS124 in a quick video byte. Generated by Google NotebookLM using my personal class notes. | — |
| 1/20/26 | CS221 in 7 minutes | A quick video bite on CS221 and what the whole course is about, generated by Google NotebookLM from my class notes. | — |
Chart Positions
3 placements across 3 markets.
