AI Use Case Discovery & Data Readiness Assessment
Learning Path
Hands-on Labs
Each objective has a coding lab that opens in VS Code in your browser
Build UseCaseScoringEngine with weighted criteria evaluation using Instructor and Pydantic models
Build a UseCaseScoringEngine class that accepts use case descriptions and evaluates them against weighted criteria (business impact, technical feasibility, data readiness, time-to-value). Use Instructor with Pydantic response models to extract structured scores from the LLM's analysis. Implement a POST /api/v1/scoring/evaluate endpoint that returns ranked use cases with confidence intervals and a GET /api/v1/scoring/criteria endpoint for managing scoring weights.
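The weighted-ranking step can be sketched independently of the LLM call. A minimal sketch, assuming the criterion weights and the UseCaseScore shape below, which stand in for the lab's Instructor-extracted Pydantic models:

```python
from dataclasses import dataclass

# Illustrative weights; the lab manages these via GET /api/v1/scoring/criteria.
WEIGHTS = {
    "business_impact": 0.35,
    "technical_feasibility": 0.25,
    "data_readiness": 0.25,
    "time_to_value": 0.15,
}

@dataclass
class UseCaseScore:
    """Per-criterion scores (1-10) for a single use case."""
    name: str
    scores: dict[str, float]

    @property
    def weighted_total(self) -> float:
        # Weighted sum over the four criteria.
        return sum(WEIGHTS[c] * s for c, s in self.scores.items())

def rank_use_cases(cases: list[UseCaseScore]) -> list[UseCaseScore]:
    """Sort best-first by weighted total, as the evaluate endpoint would."""
    return sorted(cases, key=lambda c: c.weighted_total, reverse=True)
```

In the full lab, the per-criterion scores come from an Instructor-wrapped LLM call rather than hand-entered numbers, and confidence intervals are attached per use case.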
Implement DataReadinessProfiler that analyzes CSV/JSON datasets for quality metrics and PII detection with Presidio
Build a DataReadinessProfiler class that ingests CSV and JSON files and produces a readiness report covering schema completeness, null rates, cardinality, data volume, and PII exposure. Use Presidio for PII detection across common entity types (email, phone, SSN, names). Implement a POST /api/v1/data-readiness/profile endpoint that accepts file uploads and a GET /api/v1/data-readiness/reports/{id} endpoint that returns the structured assessment.
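The per-column quality metrics can be prototyped with the standard library alone; this sketch covers null rates and cardinality only, leaving out the Presidio PII scan and data-volume stats the full profiler would add:

```python
import csv
import io

def profile_csv(text: str) -> dict:
    """Per-column null rate and cardinality for a CSV string.

    A stand-in for the lab's DataReadinessProfiler; the real class would
    also measure data volume and run Presidio entity detection.
    """
    rows = list(csv.DictReader(io.StringIO(text)))
    if not rows:
        return {}
    report = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        non_null = [v for v in values if v not in ("", None)]
        report[col] = {
            "null_rate": round(1 - len(non_null) / len(rows), 3),
            "cardinality": len(set(non_null)),
        }
    return report
```

Columns such as `email` in the sample data would additionally be flagged by Presidio's analyzer as PII in the full implementation.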
Build DiscoveryInterviewAgent using LangGraph with structured question flows and LLM-powered insight extraction
Build a DiscoveryInterviewAgent as a LangGraph graph with states for question_generation, response_capture, follow_up_analysis, and insight_extraction. The agent generates contextual follow-up questions based on previous answers and extracts structured insights (pain points, opportunities, constraints) using Pydantic models. Implement a POST /api/v1/discovery/sessions endpoint to create interview sessions and a POST /api/v1/discovery/sessions/{id}/respond endpoint to submit answers and receive follow-ups.
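The four-state flow can be mocked with a plain transition table before wiring it into a LangGraph StateGraph. The handler signatures here are illustrative; a real graph would also add a conditional edge from follow_up_analysis back to question_generation when another follow-up is warranted:

```python
# Transition table mirroring the lab's LangGraph node names.
TRANSITIONS = {
    "question_generation": "response_capture",
    "response_capture": "follow_up_analysis",
    "follow_up_analysis": "insight_extraction",
    "insight_extraction": None,  # terminal node
}

def run_session(handlers: dict, state: dict) -> dict:
    """Drive the interview flow; each handler takes and returns the state dict."""
    node = "question_generation"
    while node is not None:
        state = handlers[node](state)
        node = TRANSITIONS[node]
    return state
```

In the lab, each handler is an LLM-backed node and the extracted insights (pain points, opportunities, constraints) are validated against Pydantic models before they enter the state.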
Implement ProviderFeasibilityAnalyzer that benchmarks use cases against OpenAI/Gemini/Anthropic capabilities via LiteLLM
Build a ProviderFeasibilityAnalyzer class that takes a use case description and runs it through OpenAI GPT-4o, Gemini 2.5 Flash, and Anthropic Claude via LiteLLM. Compare response quality, latency, token usage, and estimated cost. Implement a POST /api/v1/feasibility/analyze endpoint that returns a ProviderComparison Pydantic model with per-provider scores and a recommendation. Use httpx for async provider health checks.
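The recommendation step reduces to a composite score over per-provider metrics. The metric names and weights below are hypothetical; in the lab the raw numbers would come from timing and token-counting LiteLLM calls rather than hand-entered values:

```python
def score_provider(m: dict) -> float:
    """Composite score: reward quality, penalize latency and cost.

    Weights are hypothetical; a real analyzer would calibrate them
    against the use case's requirements.
    """
    return 0.6 * m["quality"] - 0.2 * m["latency_s"] - 0.2 * m["cost_per_1k_usd"]

def recommend(results: dict[str, dict]) -> str:
    """Name of the highest-scoring provider, as ProviderComparison would report."""
    return max(results, key=lambda p: score_provider(results[p]))
```

Separating the scoring function from the data-gathering step keeps the comparison testable without live API keys.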
Build DiscoveryReportGenerator with Jinja2 templates producing structured markdown reports from assessment data
Build a DiscoveryReportGenerator class that aggregates outputs from the scoring engine, data readiness profiler, interview agent, and feasibility analyzer into a cohesive discovery report. Use Jinja2 templates for markdown formatting with sections for the executive summary, use case rankings, data readiness heatmap, and provider recommendations. Implement a POST /api/v1/reports/generate endpoint and a GET /api/v1/reports/{id} endpoint for retrieval.
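The rendering step can be sketched with the stdlib's string.Template as a stand-in for Jinja2 (a real template would use Jinja2 loops and filters for the rankings and heatmap sections); the section headings follow the report layout above:

```python
from string import Template  # stand-in for Jinja2 in this sketch

# Markdown skeleton matching the report sections described above.
REPORT = Template("""\
# Discovery Report: $title

## Executive Summary
$summary

## Use Case Rankings
$rankings
""")

def render_report(title: str, summary: str, ranked: list[tuple[str, float]]) -> str:
    """Render a markdown report; `ranked` pairs come from the scoring engine."""
    rankings = "\n".join(
        f"{i}. {name} ({score:.2f})"
        for i, (name, score) in enumerate(ranked, 1)
    )
    return REPORT.substitute(title=title, summary=summary, rankings=rankings)
```

The same pattern extends to the data readiness heatmap and provider recommendation sections, each fed by the corresponding component's structured output.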