All GenAI Disciplines
Intermediate · 20 Chapters

LLM Foundations

Deep understanding of LLM internals, data pipelines, architecture, and multi-provider integration. Covers transformer anatomy, inference optimization, and alternative architectures. Prepares Python developers for production agent development.

LLM · Transformers · Tokenization · Inference · Multi-Provider

Learning Path

5 phases · 20 chapters
Phase 1 · 5 chapters

Intermediate Python

Advanced Python patterns for GenAI development

0/53 quiz questions
0/5 labs

Tools & Topics

Generators, async/await, type hints, Pydantic, data pipelines, HTTP clients

Goals

  • Build data processing pipelines with generators
  • Write async Python code for concurrent operations
  • Use type hints and Pydantic for validation
  • Work with data pipelines and transformations

Chapters

1. Generators & Iterators
2. Async Programming Basics
3. Type Hints & Pydantic
4. Data Pipelines & Transformations
5. HTTP Clients & httpx
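The generator-pipeline goal above can be sketched in a few lines. This is an illustrative toy, not course code; `read_lines` and `to_upper` are hypothetical stage names. Each stage is a generator that pulls items lazily from the previous one, so nothing is materialized until the pipeline is consumed:

```python
from typing import Iterator

def read_lines(text: str) -> Iterator[str]:
    # Stage 1: lazily yield non-empty, stripped lines
    for line in text.splitlines():
        if line.strip():
            yield line.strip()

def to_upper(lines: Iterator[str]) -> Iterator[str]:
    # Stage 2: transform each item as it flows through
    for line in lines:
        yield line.upper()

# Compose stages; items stream through one at a time
pipeline = to_upper(read_lines("hello\n\nworld\n"))
print(list(pipeline))  # ['HELLO', 'WORLD']
```

Because each stage holds only one item at a time, the same shape scales to files or API streams far larger than memory.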
Phase 2 · 5 chapters

LLM Fundamentals

Core LLM concepts and API integration


Tools & Topics

First API call, tokenizer internals, context windows, inference phases

Goals

  • Make your first LLM API call
  • Understand tokenization and BPE algorithm
  • Manage context windows and KV-cache
  • Understand LLM architectures (Dense, MoE)

Chapters

6. Your First LLM Call
7. Tokenizer Internals
8. Context Windows, KV-Cache & Memory
9. LLM Architectures - Dense, MoE & KV-Cache Optimizations
10. LLM Inference - Prefill & Decode
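The tokenization goal above can be illustrated with a toy greedy longest-match tokenizer. This is a simplification, not the BPE algorithm itself (real BPE learns merge rules from corpus statistics); the vocabulary here is hand-made for illustration:

```python
def greedy_tokenize(text: str, vocab: set[str]) -> list[str]:
    # Greedily match the longest vocabulary piece at each position;
    # unknown single characters fall through as their own tokens.
    tokens: list[str] = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

vocab = {"low", "er", "lo", "w"}
print(greedy_tokenize("lower", vocab))  # ['low', 'er']
```

The key intuition carries over to real tokenizers: token count, not character count, is what fills a context window and what providers bill for.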
Phase 3 · 3 chapters

LLM Architecture Deep Dive

Understanding model internals and alternatives


Tools & Topics

Transformer layers, attention patterns, FFN variants, SSMs, hybrid models

Goals

  • Understand transformer layer anatomy
  • Analyze attention patterns and mechanisms
  • Compare FFN variants and activation functions
  • Explore SSMs and hybrid architectures

Chapters

11. Transformer Layer Anatomy
12. FFN Variants & Activation Functions
13. Alternative Architectures - SSMs & Hybrids
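The attention mechanism named in the goals above reduces to a short formula: scores = q·k / √d, weights = softmax(scores), output = Σ wᵢvᵢ. A minimal pure-Python sketch for a single query vector (list-based for clarity; real implementations use batched tensor ops):

```python
import math

def softmax(xs: list[float]) -> list[float]:
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(q: list[float], keys: list[list[float]],
              values: list[list[float]]) -> list[float]:
    # Scaled dot-product attention for one query:
    # score each key, softmax, then mix the values by those weights
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]
```

The output is always a convex combination of the value vectors, which is why attention weights are directly interpretable as "how much each position contributes".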
Phase 4 · 2 chapters

AI-Ready Python

Production-ready LLM integration patterns


Tools & Topics

Sampling, prompt engineering, multi-provider

Goals

  • Control LLM outputs with sampling parameters
  • Apply prompt engineering techniques
  • Build multi-provider LLM integrations
  • Use Jinja2 for prompt templates

Chapters

14. Sampling Parameters & Output Control
15. Multi-Provider & Prompt Engineering
Phase 5 · 5 chapters

Production Readiness

Essential patterns for production LLM applications


Tools & Topics

Function calling, embeddings, RAG, cost awareness, retry patterns

Goals

  • Implement function calling with LLM APIs
  • Build semantic search with embeddings
  • Create RAG pipelines for knowledge retrieval
  • Track and optimize token costs

Chapters

16. Function Calling Fundamentals
17. Embeddings & Semantic Search
18. RAG Fundamentals
19. Cost Awareness & Token Economics
20. Retry Patterns with Tenacity
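Tenacity's `@retry` decorator packages the retry pattern from the final chapter; the underlying idea is exponential backoff, sketched here by hand so the mechanism is visible (`retry_with_backoff` and `flaky` are hypothetical names, not Tenacity's API):

```python
import time

def retry_with_backoff(fn, max_attempts: int = 3, base_delay: float = 0.01):
    # Retry fn on any exception; the delay doubles after each failure.
    # The last failure is re-raised so callers still see the error.
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# A function that fails twice, then succeeds (simulating transient API errors)
calls = {"n": 0}
def flaky() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(retry_with_backoff(flaky))  # succeeds on the third attempt
```

In production you would also add jitter and retry only on retryable errors (rate limits, timeouts), both of which Tenacity supports declaratively.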

© 2026 GenBodha. All rights reserved.