# Article Series & Collections

Curated multi-part series that guide you through related topics in a structured sequence.

<div class="callout" data-callout="info">
<div class="callout-title">About Series</div>
<div class="callout-content">
Series are multi-part articles designed to be read in sequence. Each part builds on the previous one, providing a structured learning path through complex topics.
</div>
</div>

---

## How Series Work

When an article is part of a series, its metadata includes:

- `series: "Series Name"` - The collection it belongs to
- `part: N` - Its position in the sequence

Series articles link to the previous and next parts for easy navigation.

---

## Active Series

### DGX Lab Chronicles

**Status:** In Progress (5 parts published) 🚀
**Level:** Beginner → Intermediate
**Total Time:** 58 minutes (so far)

Real-world AI experiments on NVIDIA DGX hardware, documenting the journey of building production ML infrastructure, from shell optimization to intelligent routing, RAG systems, performance benchmarking, and delegation patterns for complex ML experiments.

**Published Parts:**

- [[dgx-lab-intelligent-gateway-heuristics-vs-ml-day-1|Day 1: When Simple Heuristics Beat ML by 95,000x]] (14 min) - Building an intelligent AI gateway that routes requests 95,000x faster than ML while maintaining 90% accuracy
- [[dgx-lab-supercharged-bashrc-ml-workflows-day-2|Day 2: Supercharge Your Shell with 50+ ML Productivity Aliases]] (10 min) - Transform your shell into a productivity powerhouse—save 900 keystrokes and 20 minutes daily
- [[dgx-lab-building-complete-rag-infrastructure-day-3|Day 3: Building a Complete RAG Infrastructure]] (10 min) - Deploy production-grade RAG infrastructure with Qdrant, AnythingLLM, and proper Docker networking
- [[dgx-lab-benchmarks-vs-reality-day-4|Day 4: When Benchmark Numbers Meet Production Reality]] (10 min) - NVIDIA's DGX Spark benchmarks vs.
6 days of intensive ML workloads—what they don't tell you about GPU inference failures and memory fragmentation
- [[medical-llm-fine-tuning-70-to-92-percent|Day 6: How I Delegated a 9-Day Medical AI Experiment]] (14 min) - Learn when to delegate and when to intervene in complex ML projects, turning 70% accuracy into 92.4% through strategic decision-making

**Coming Next:**

- Day 5: Fine-Tuning at Scale
- Day 7: Production Deployment Patterns

---

### Building a Production ML Workspace on GPU Infrastructure

**Status:** Complete (5 of 5 parts published) ✅
**Level:** Beginner → Intermediate
**Total Time:** 51 minutes

Learn how to build production-ready ML workspaces on GPU infrastructure, covering workspace organization, documentation systems, experiment tracking, agent templates, and team collaboration workflows.

**All Parts:**

- [[building-production-ml-workspace-part-1-structure|Part 1: Designing an Organized Structure]] (8 min) - Create a scalable workspace structure for Ollama models, fine-tuning, agents, and experiments
- [[building-production-ml-workspace-part-2-documentation|Part 2: Documentation Systems That Scale]] (7 min) - Build a three-tier documentation system for debugging, review, and knowledge sharing
- [[building-production-ml-workspace-part-3-experiments|Part 3: Experiment Tracking and Reproducibility]] (12 min) - Master experiment tracking with MLflow and implement reproducible workflows
- [[building-production-ml-workspace-part-4-agents|Part 4: Production-Ready AI Agent Templates]] (10 min) - Build production-ready AI agents with standardized templates and comprehensive testing
- [[building-production-ml-workspace-part-5-collaboration|Part 5: Team Collaboration and Workflow Integration]] (14 min) - Complete your workspace with team collaboration patterns and workflow automation

**Learning Path:** [[reading-paths#Path 7 GPU ML Development|Path 7: GPU ML Development]]

---

## Upcoming Series

We're planning several article series.
Check back soon or follow for updates:

### Planned Series

**Building Production AI Agents** (Planned)

- Part 1: Agent Architecture Fundamentals
- Part 2: Tool Integration and MCP
- Part 3: Memory and Context Management
- Part 4: Production Deployment and Monitoring

**LLM Development Mastery** (Planned)

- Part 1: Prompt Engineering Foundations
- Part 2: Advanced Prompt Patterns
- Part 3: Fine-tuning and Optimization
- Part 4: Evaluation and Quality Assurance

**AI Systems Architecture** (Planned)

- Part 1: Designing RAG Systems
- Part 2: Multi-Agent Orchestration
- Part 3: Scalability and Performance
- Part 4: Production Best Practices

---

## Related Collections

While we're building out formal series, you can explore related articles through:

- [[reading-paths|Learning Paths]] - Curated sequences through existing articles
- [[index/by-topics|Topics]] - Related articles grouped by subject
- [[index/by-tag|Tags]] - Discover articles by specific technologies

---

## Following a Series

To follow a series:

1. Start with Part 1
2. Follow the "Next in Series" links at the end of each article
3. Complete all parts in sequence for full understanding

<div class="callout" data-callout="tip">
<div class="callout-title">Want to Suggest a Series?</div>
<div class="callout-content">
If you'd like to see a particular series created, reach out via Twitter [@bioinfo](https://twitter.com/bioinfo) or through the blog.
</div>
</div>

---

## Quick Navigation

- [[⌂ Home|Back to Home]]
- [[index/by-tag|Browse by Tag]]
- [[by-date|Browse by Date]]
- [[index/by-difficulty|Browse by Difficulty]]
- [[index/by-topics|Browse by Topics]]
- [[reading-paths|Learning Paths]]