Git Workflows for AI Teams
Learning Path
Hands-on Labs
Each objective has a hands-on coding lab that opens in VS Code in your browser.
Implement trunk-based development for AI projects
You will set up a Git branching strategy optimized for AI/ML projects on GKE. Configure a trunk-based workflow in which main is always deployable. Create short-lived feature branches that follow naming conventions: feature/add-prompt-template, fix/embedding-pipeline, config/model-parameters. Implement the full branch lifecycle: create from main, push commits, open a PR, squash-merge back to main, delete the branch. Configure Git to rebase rather than merge on pull (git config pull.rebase true). Demonstrate the workflow end to end with a prompt template change.
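The branch lifecycle above can be sketched end to end in a throwaway repository — a minimal sketch; file names, contents, and commit messages are illustrative:

```shell
# Trunk-based lifecycle in a scratch repo (paths and messages are placeholders).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
git config pull.rebase true                      # rebase instead of merge on pull

echo "You are a helpful assistant." > prompt.txt
git add prompt.txt
git commit -qm "feat: add base prompt template"

# Short-lived branch, named per convention
git switch -q -c feature/add-prompt-template
echo "Answer concisely." >> prompt.txt
git commit -aqm "feat: tighten prompt wording"

# Squash-merge back to main, then delete the branch
git switch -q main
git merge --squash -q feature/add-prompt-template
git commit -qm "feat: add prompt template (squashed)"
git branch -D feature/add-prompt-template        # squash merges need -D, not -d
git log --oneline
```

In a real workflow the squash merge happens through the PR's "Squash and merge" button rather than locally, but the resulting history on main is the same: one commit per feature branch.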
Configure branch protection and status checks
You will configure GitHub branch protection rules for production safety. Protect the main branch: require at least one approving review, require status checks to pass (lint, test, type-check), require branches to be up to date before merging, and prevent force pushes. Configure a CODEOWNERS file: assign AI engineers to /prompts/** and /model-configs/**, and platform engineers to /k8s/** and /helm/**. Set up required status checks that run in GKE pods: pytest, mypy, and ruff lint. Verify the protection by attempting a direct push to main and confirming it is rejected.
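A CODEOWNERS file matching that ownership split might look like this — the org and team slugs are placeholders for your organization's actual teams:

```
# .github/CODEOWNERS — team handles below are placeholders
/prompts/**        @your-org/ai-engineers
/model-configs/**  @your-org/ai-engineers
/k8s/**            @your-org/platform-engineers
/helm/**           @your-org/platform-engineers
```

Reviews from the matching owners become mandatory once "Require review from Code Owners" is enabled in the branch protection rule.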
Build PR templates for prompt and model changes
You will create pull request templates tailored for AI artifact changes. Build a .github/pull_request_template.md with sections: Change Type (prompt/model-config/pipeline/infrastructure), Description, Testing (evaluation results, before/after metrics), Rollback Plan. Create a specialized template for prompt changes: .github/PULL_REQUEST_TEMPLATE/prompt_change.md that includes fields for prompt version, target model (OpenAI GPT-4/Gemini), evaluation dataset, and accuracy delta. Implement a PR checklist: evaluation ran, no regression in accuracy, reviewed by domain expert, rollback tested.
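The base template described above might be sketched as follows — a minimal starting point, with the checklist items taken from the lab description:

```markdown
<!-- .github/pull_request_template.md — a minimal sketch -->
## Change Type
<!-- prompt / model-config / pipeline / infrastructure -->

## Description

## Testing
<!-- evaluation results, before/after metrics -->

## Rollback Plan

## Checklist
- [ ] Evaluation ran
- [ ] No regression in accuracy
- [ ] Reviewed by domain expert
- [ ] Rollback tested
```

Specialized templates placed under .github/PULL_REQUEST_TEMPLATE/ can be selected when opening a PR via the `template` query parameter in the new-PR URL.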
Manage merge conflicts in AI data files
You will handle merge conflicts specific to AI projects. Practice resolving conflicts in JSONL training data files where two branches add different training examples — configure Git's built-in union merge driver in .gitattributes (*.jsonl merge=union) so that both branches' additions are kept. Handle conflicts in JSON prompt template files, where line-based union merging would break the syntax, with a custom merge strategy. Practice resolving conflicts in Kubernetes manifests (YAML) where two branches modify different sections. Build a pre-merge validation hook that runs JSON/YAML lint on conflicted files after resolution.
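The additive JSONL case can be demonstrated in a scratch repository — with merge=union declared in .gitattributes, two branches that append different training examples merge without conflict markers (file contents are illustrative):

```shell
# Two branches append different JSONL examples; union merge keeps both.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"

printf '*.jsonl merge=union\n' > .gitattributes   # additive merge for data files
printf '{"q": "base question", "a": "base answer"}\n' > train.jsonl
git add -A
git commit -qm "chore: seed training data"

git switch -q -c feature/add-example-alpha
printf '{"q": "alpha", "a": "A"}\n' >> train.jsonl
git commit -aqm "feat: add alpha example"

git switch -q main
git switch -q -c feature/add-example-beta
printf '{"q": "beta", "a": "B"}\n' >> train.jsonl
git commit -aqm "feat: add beta example"

git switch -q main
git merge -q --no-edit feature/add-example-alpha  # fast-forward
git merge -q --no-edit feature/add-example-beta   # union keeps both added lines
cat train.jsonl
```

The union driver simply concatenates both sides' lines in a conflicting hunk, which is why it is safe for append-only JSONL but not for structured JSON or YAML, where it can silently produce invalid documents.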
Implement pre-commit hooks and automated dependency updates
You will build code quality automation combining pre-commit hooks with Renovate for dependency management. Pre-commit hooks: configure .pre-commit-config.yaml with hooks for Python (ruff format, ruff check, mypy), YAML validation (yamllint for prompts and configs), JSON schema validation (for prompt templates and eval datasets using Pydantic model export), and secret detection (detect-secrets to prevent API key commits). Run pre-commit install to activate hooks, then test: commit a file with a hardcoded OpenAI key — verify it's blocked. Renovate setup: deploy Renovate as a GKE CronJob that scans your repository and creates PRs for dependency updates. Configure renovate.json: group Python packages by type (AI SDKs: openai, google-generativeai; frameworks: fastapi, pydantic; testing: pytest, deepeval), set auto-merge for patch versions, require manual review for major/minor. Renovate creates atomic PRs with changelogs and compatibility notes. Compare: Renovate (on-cluster, configurable) vs Dependabot (GitHub-native, simpler). Track: PRs created per week, auto-merge rate, time-to-update for security patches.
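The hook configuration above might be sketched like this — the `rev` pins are illustrative, so point them at current releases of each hook repository:

```yaml
# .pre-commit-config.yaml — rev values shown are illustrative, pin real releases
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      - id: ruff-format        # Python formatting
      - id: ruff               # Python linting
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.2
    hooks:
      - id: mypy               # static type checks
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1
    hooks:
      - id: yamllint           # prompt/config YAML validation
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets     # blocks hardcoded API keys
```

After `pre-commit install`, every `git commit` runs these hooks against the staged files; a staged OpenAI key should be caught by detect-secrets before the commit is created.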
Version AI artifacts with Git tags and releases
You will implement a versioning strategy for AI artifacts using Git tags. Create semantic version tags for prompt releases: v1.0.0-prompts, v1.1.0-prompts. Build a tagging convention for model configurations: model-config/gpt4-v2, model-config/gemini-v1. Create GitHub Releases that bundle prompt templates, model configs, and evaluation results as release assets. Build a CHANGELOG.md automation: generate changelog entries from conventional commit messages (feat:, fix:, config:). Implement a release checklist script that verifies all evaluation tests pass before allowing a tag to be created.
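The tagging conventions can be exercised in a scratch repository — file contents and tag messages below are placeholders:

```shell
# Semantic tags for prompt releases plus slash-namespaced model-config tags.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"

echo '{"template": "v1"}' > prompt.json
git add -A
git commit -qm "feat: initial prompt template"
git tag -a v1.0.0-prompts -m "Prompt release 1.0.0"

echo '{"template": "v2"}' > prompt.json
git commit -aqm "feat: revise prompt template"
git tag -a v1.1.0-prompts -m "Prompt release 1.1.0"
git tag -a model-config/gpt4-v2 -m "GPT-4 model configuration, v2"

git tag -l 'v*-prompts'       # lists both prompt releases
```

Annotated tags (-a) carry a tagger and message, which GitHub Releases build on; `git push origin v1.1.0-prompts` publishes a tag, and tag names may contain slashes, which makes the model-config/ namespace convention possible.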