GenAI Disciplines
12 career paths. Each maps real job responsibilities to hands-on courses that cover them.
Disciplines
GenAI Application Engineering
Build production RAG & prompt chain applications, design streaming chat UIs, implement guardrails & evaluation, optimize LLM inference costs, and deploy on Kubernetes.
Design and build production GenAI features
(chatbots, search, summarization) for web applications
How you build it
- Build streaming chat UIs with FastAPI backends using SSE and WebSocket transports
- Wire React frontends to LLM-powered APIs with end-to-end full-stack integration
- Deploy complete AI applications from prototype to production on Kubernetes
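The streaming bullet above hinges on the Server-Sent Events wire format: each model token is framed as a `data:` line followed by a blank line. A minimal pure-Python sketch of that framing (the payload shape and the `[DONE]` sentinel are illustrative conventions, not a fixed API):

```python
import json

def sse_events(token_stream):
    """Frame an iterable of tokens as Server-Sent Events.

    Each event is a `data:` line followed by a blank line, which is
    the wire format an SSE client (e.g. the browser EventSource API)
    expects.
    """
    for token in token_stream:
        yield f"data: {json.dumps({'token': token})}\n\n"
    # A sentinel event tells the client the stream is complete.
    yield "data: [DONE]\n\n"

# Framing three tokens from a hypothetical model stream:
frames = list(sse_events(["Hello", ",", " world"]))
```

In a FastAPI backend, a generator like this would typically be returned via `StreamingResponse(..., media_type="text/event-stream")`.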
Implement RAG pipelines
with vector databases for enterprise search and knowledge retrieval
How you build it
- Build end-to-end RAG: document chunking → embedding generation → pgvector storage → LangGraph retrieval nodes
- Validate retrieval accuracy using RAGAS metrics and implement self-verification loops
- Benchmark chunking strategies and HNSW/IVFFlat index types against precision-recall tradeoffs
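The first stage of that pipeline, document chunking, can be sketched as a fixed-size sliding window. This is one of several chunking strategies the benchmarking bullet compares; the sizes here are illustrative:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into fixed-size character windows with overlap.

    Overlapping windows reduce the chance that a relevant sentence
    is cut in half at a chunk boundary before embedding.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Small sizes for demonstration; windows start every `step` characters.
chunks = chunk_text("abcdefghij", chunk_size=4, overlap=2)
```

Each chunk would then be embedded and stored in pgvector for retrieval.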
Optimize LLM inference
for latency, cost, and reliability across multiple providers
How you build it
- Configure multi-provider routing with LiteLLM gateway including load balancing and failover
- Implement semantic caching with Redis + embedding similarity to reduce costs by 40%+
- Extract structured outputs with Pydantic AI and handle provider-specific error recovery
Integrate LLM APIs
(OpenAI, Gemini, Anthropic) into existing applications with error handling
How you build it
- Connect to OpenAI, Anthropic, and Gemini APIs with streaming, function calling, and embeddings
- Build FastAPI rate limiting middleware with exponential backoff and retry logic
- Navigate provider contract differences across authentication, token limits, and response formats
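The backoff-and-retry bullet can be sketched provider-agnostically: retry a call on transient errors, doubling the delay each attempt. The `flaky` function below simulates a provider that rate-limits twice before succeeding; which exception types count as transient is an assumption each real client would tailor to its provider:

```python
import time

def call_with_backoff(fn, max_retries=3, base_delay=0.01,
                      retry_on=(RuntimeError,)):
    """Retry fn() with exponential backoff on transient errors.

    Real clients would match provider-specific rate-limit (429)
    and timeout exceptions in `retry_on`.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...

attempts = {"n": 0}

def flaky():
    """Simulated provider call that rate-limits twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("simulated 429 rate limit")
    return "ok"

result = call_with_backoff(flaky)
```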
Build AI agent features
with tool calling, function execution, and human-in-the-loop workflows
How you build it
- Design LangGraph state machines with structured tool calling and JSON schema validation
- Implement MCP tool integration for dynamic tool discovery and execution
- Wire interruptible agent workflows with human approval gates and checkpoint persistence
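The core of structured tool calling is a registry that validates a model-emitted call before dispatching it. This is a much-simplified stand-in for what LangGraph and MCP provide; the tool name, arguments, and error shape are illustrative:

```python
TOOLS = {}

def tool(name, required_args):
    """Register a function as a callable tool with required arguments."""
    def register(fn):
        TOOLS[name] = (fn, set(required_args))
        return fn
    return register

@tool("get_weather", required_args=["city"])
def get_weather(city):
    return f"Sunny in {city}"  # stub standing in for a real API call

def execute_tool_call(call):
    """Validate a model-emitted tool call, then dispatch it."""
    fn, required = TOOLS[call["name"]]
    missing = required - set(call["arguments"])
    if missing:
        # Returning the error to the model lets it repair its call.
        return {"error": f"missing arguments: {sorted(missing)}"}
    return {"result": fn(**call["arguments"])}

ok = execute_tool_call({"name": "get_weather",
                        "arguments": {"city": "Oslo"}})
bad = execute_tool_call({"name": "get_weather", "arguments": {}})
```

A human-in-the-loop variant would pause before `fn(...)` runs and wait for an approval signal.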
Evaluate model outputs
using automated metrics and LLM-as-judge for production quality
How you build it
- Build evaluation pipelines using RAGAS faithfulness/relevance metrics and DeepEval harnesses
- Integrate LLM-as-judge scoring into CI/CD gates for automated quality control
- Track quality metrics over time with Langfuse dashboards and regression detection
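The CI/CD-gate bullet reduces to a simple decision rule once judge scores are collected: fail the build when too few examples clear a per-example threshold. The thresholds and score values below are illustrative, not ones the courses prescribe:

```python
def quality_gate(scores, threshold=0.8, min_pass_rate=0.9):
    """Fail the build when too few judge scores clear the threshold.

    `scores` are per-example ratings in [0, 1], e.g. faithfulness
    scores from an LLM-as-judge run over an evaluation set.
    """
    if not scores:
        raise ValueError("no evaluation scores collected")
    pass_rate = sum(s >= threshold for s in scores) / len(scores)
    return {"pass_rate": pass_rate, "passed": pass_rate >= min_pass_rate}

# 3 of 4 examples score >= 0.8, so the pass rate is 0.75.
report = quality_gate([0.95, 0.9, 0.85, 0.7], min_pass_rate=0.7)
```

In a pipeline, a falsy `passed` would exit nonzero and block the merge.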
Deploy and containerize
GenAI applications on Kubernetes with CI/CD
How you build it
- Containerize FastAPI + LLM applications with multi-stage Docker builds
- Deploy to Kubernetes with Helm charts, readiness probes, and Ingress configuration
- Automate rollouts with ArgoCD GitOps workflows and Kustomize environment overlays
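The multi-stage build mentioned above separates dependency installation from the runtime image so the final layer stays small. A minimal sketch; the base image, port, and `app.main:app` module path are assumptions about the application layout:

```dockerfile
# Stage 1: install dependencies into an isolated prefix
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: copy only the installed packages into a clean runtime image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

A Kubernetes readiness probe would then poll a health endpoint on port 8000 before routing traffic to the pod.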