Chapter 1

Containerizing LLM Applications

Topics: Docker fundamentals, Dockerfiles and multi-stage builds, Python LLM app containerization, environment variables for configuration, Docker Compose for multi-service apps, container registries and image tagging.

Learning Path

Hands-on Labs

Each objective has a coding lab that opens in VS Code in your browser.

Objective 1

Write a Python app that calls the Gemini API and returns structured responses

Goal

Build a Python CLI application that sends prompts to the Gemini API via the platform proxy and formats the response. This is the app you will containerize throughout the chapter.
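A minimal sketch of what such a CLI could look like. The endpoint path follows Gemini's public generateContent REST shape, but the GEMINI_PROXY_URL variable name, the model name, and the exact response format the proxy returns are assumptions; the lab defines the real values.

```python
import argparse
import json
import os
import sys

import requests


def main() -> None:
    parser = argparse.ArgumentParser(description="Send a prompt to Gemini via the platform proxy")
    parser.add_argument("prompt", help="prompt text to send")
    args = parser.parse_args()

    # GEMINI_PROXY_URL is a hypothetical name; the lab supplies the real setting.
    base_url = os.environ.get("GEMINI_PROXY_URL")
    if not base_url:
        sys.exit("error: GEMINI_PROXY_URL is not set")

    # Assumes the proxy accepts Gemini's generateContent request body unchanged.
    resp = requests.post(
        f"{base_url}/v1beta/models/gemini-2.0-flash:generateContent",
        json={"contents": [{"parts": [{"text": args.prompt}]}]},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()

    # Extract the first candidate's text and emit a structured JSON result.
    text = data["candidates"][0]["content"]["parts"][0]["text"]
    print(json.dumps({"prompt": args.prompt, "response": text}, indent=2))


if __name__ == "__main__":
    main()
```

Reading configuration from the environment rather than hard-coding it is what makes the container in Objective 3 configurable without a rebuild.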

Objective 2

Write a Dockerfile and build a container image for the LLM app

Goal

Create a Dockerfile that packages the Python LLM application with all its dependencies. Learn layer caching, .dockerignore, and choosing the right base image.
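A sketch of a multi-stage Dockerfile for that app, assuming the app.py and requirements.txt from Objective 1. The builder stage keeps pip and build artifacts out of the final image, and copying requirements.txt before the source code lets the dependency layer cache survive code-only changes:

```dockerfile
# Build stage: install dependencies into an isolated virtual environment
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /venv && /venv/bin/pip install --no-cache-dir -r requirements.txt

# Runtime stage: copy only the venv and the application code
FROM python:3.12-slim
COPY --from=builder /venv /venv
WORKDIR /app
COPY app.py .
ENTRYPOINT ["/venv/bin/python", "app.py"]
```

A .dockerignore listing entries such as .git, __pycache__/, and .venv/ keeps those paths out of the build context, which speeds up builds and avoids baking local state into the image.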

Objective 3

Run the containerized LLM app with environment-based configuration

Goal

Run the Docker container, passing the Gemini proxy URL and other settings via environment variables. Verify the app calls the LLM and returns results from inside the container.
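A hedged example of such a run, reusing the hypothetical llm-app:dev image and GEMINI_PROXY_URL variable from the earlier sketches:

```sh
# -e injects configuration at run time; the entrypoint receives the prompt as its argument
docker run --rm \
  -e GEMINI_PROXY_URL=https://proxy.example.internal \
  llm-app:dev "Explain container images in one sentence"
```

For more than a couple of settings, `docker run --env-file .env ...` loads them from a file instead of repeating -e flags.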

Objective 4

Use Docker Compose to run the LLM app with supporting services

Goal

Define a Docker Compose file that runs the LLM app alongside a proxy service and a Redis cache. Learn service networking and dependency ordering in Compose.
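A sketch of what that Compose file might look like; the proxy image name and the environment variable names are placeholders, not the lab's actual values:

```yaml
services:
  llm-app:
    build: .
    environment:
      # Compose's default network resolves service names as hostnames
      GEMINI_PROXY_URL: http://proxy:8080
      REDIS_URL: redis://redis:6379
    depends_on:
      - proxy
      - redis
  proxy:
    image: gemini-proxy:latest   # placeholder; the lab supplies the real proxy image
  redis:
    image: redis:7-alpine
```

Note that depends_on only controls start order; it does not wait for a service to be ready. For readiness gating, give a service a healthcheck and use the `condition: service_healthy` form of depends_on.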

Objective 5

Tag images with semantic versions and push to a container registry

Goal

Learn image tagging strategies (latest, semver, git-sha) and push your LLM app image to a container registry so Kubernetes can pull it.
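A sketch of the three strategies applied to one image, using a placeholder registry host and repository path:

```sh
SHA=$(git rev-parse --short HEAD)

# Same image, three tags: a moving "latest", an immutable semver, and a traceable git SHA
docker tag llm-app:dev registry.example.com/team/llm-app:latest
docker tag llm-app:dev registry.example.com/team/llm-app:1.0.0
docker tag llm-app:dev registry.example.com/team/llm-app:"$SHA"

# Push every local tag for this repository in one command
docker push --all-tags registry.example.com/team/llm-app
```

Immutable tags (semver, git SHA) are what you want Kubernetes manifests to reference; "latest" is convenient locally but makes it hard to tell which build is actually running.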

Objective 6

Debug containers with exec, logs, and inspect

Goal

Learn essential Docker debugging commands: exec into running containers, read logs, inspect image layers, and troubleshoot common issues like missing dependencies or wrong entrypoints.
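The core commands, shown against a hypothetical container name and the llm-app:dev image from earlier objectives:

```sh
docker logs -f my-llm-app          # stream the container's stdout/stderr
docker exec -it my-llm-app sh      # open a shell inside the running container
docker inspect my-llm-app          # entrypoint, env vars, mounts, and network as JSON
docker history llm-app:dev         # per-layer commands and sizes for the image
```

`docker inspect` is usually the fastest way to confirm a wrong entrypoint or a missing environment variable, while a shell from `docker exec` lets you check for missing dependencies or files directly inside the container.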