Master Kubernetes by deploying and operating real LLM-powered applications. From your first container calling Gemini to production-grade Helm charts with autoscaling, every chapter builds a working AI service on your own vCluster. No GPUs needed — all LLM interaction uses hosted model APIs (Gemini, OpenAI, Anthropic) through proxy services, exactly like production GenAI systems.
Python essentials and development environment for agent development
Virtual environments, async programming, type hints, Pydantic, error handling, testing, debugging, logging, project structure
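The topics above come together in even the smallest agent service. As a minimal sketch of how type hints, async programming, and error handling combine (using a stdlib dataclass in place of Pydantic for portability, and a stubbed function instead of a real LLM call — all names here are illustrative, not from the course materials):

```python
import asyncio
from dataclasses import dataclass

# Illustrative response type (a real service might use a Pydantic model).
@dataclass
class Completion:
    model: str
    text: str

async def fake_llm_call(prompt: str, model: str = "stub-model") -> Completion:
    """Stand-in for an async LLM API call -- no network involved."""
    await asyncio.sleep(0)  # yield control, as a real HTTP call would
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    return Completion(model=model, text=f"echo: {prompt}")

async def main() -> None:
    result = await fake_llm_call("hello")
    print(result.text)

if __name__ == "__main__":
    asyncio.run(main())
```

The same shape — typed inputs, a typed response object, explicit validation errors, and an awaitable call — carries over directly once the stub is replaced by a real API client.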
Core LLM concepts: API clients, caching, token economics, and function calling basics
OpenAI, Anthropic, and Gemini API clients, prompt caching, token economics, function calling basics
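Token economics boil down to simple arithmetic: providers price input and output tokens separately, per million tokens. A minimal cost estimator, using made-up placeholder prices rather than any real provider's pricing:

```python
# Per-million-token prices in USD -- placeholder values, not real pricing.
PRICES_PER_MTOK = {
    "example-model": {"input": 1.00, "output": 4.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    p = PRICES_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token reply
cost = estimate_cost("example-model", 2_000, 500)
print(f"${cost:.6f}")
```

Output tokens typically cost several times more than input tokens, which is why prompt caching — reusing a stored prefix so repeated input tokens are billed at a discount — matters at scale.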