Prompt Engineering

16 episodes — 90-second audio overviews on prompt engineering.

Coding benchmarks — HumanEval, SWE-bench, MBPP
1:18

Standard evaluations measuring code generation quality: from simple function completion (HumanEval) to resolving real GitHub issues (SWE-bench).

AI Code Generation · Prompt Engineering · Generative AI · GenAI Explained · 2026-02-19
Repository-level code understanding — beyond single files
1:29

Models that navigate imports, call graphs, type systems, and project structure to generate contextually correct changes spanning multiple files.

AI Code Generation · Prompt Engineering · Generative AI · GenAI Explained · 2026-02-19
Code execution feedback — running code to self-correct
1:37

Agents that generate code, execute it in a sandbox, read error messages, and iteratively fix bugs until all tests pass — closing the generate-test loop.

AI Code Generation · Prompt Engineering · Generative AI · GenAI Explained · 2026-02-19
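The generate-execute-repair loop described above can be sketched in a few lines. This is a toy illustration: the "model" is a canned list of candidate implementations standing in for real LLM calls, but the control flow (execute in an isolated namespace, capture the error, retry with feedback) mirrors what such agents do.

```python
# Candidate drafts standing in for successive model generations.
CANDIDATES = [
    "def add(a, b):\n    return a - b",   # buggy first draft
    "def add(a, b):\n    return a + b",   # "repaired" second attempt
]

def run_tests(namespace):
    """Return an error message, or None if the generated code passes."""
    try:
        assert namespace["add"](2, 3) == 5
        return None
    except Exception as exc:  # this error text would be fed back to the model
        return repr(exc)

def generate_with_feedback(max_attempts=5):
    feedback = None
    for attempt in range(min(max_attempts, len(CANDIDATES))):
        code = CANDIDATES[attempt]       # real systems: llm(prompt + feedback)
        sandbox = {}
        exec(code, sandbox)              # execute in an isolated namespace
        feedback = run_tests(sandbox)
        if feedback is None:
            return code, attempt + 1     # all tests pass: done
    raise RuntimeError(f"no passing solution; last error: {feedback}")
```

Production agents run the `exec` step in a real sandbox (container or jailed subprocess), not in-process as here.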
Code generation from natural language — describing what you want
1:37

Translating English descriptions into working functions, classes, and scripts — the core use case driving AI-assisted software development.

AI Code Generation · Prompt Engineering · Generative AI · GenAI Explained · 2026-02-19
Fill-in-the-middle (FIM) — bidirectional code completion
1:23

Training models to predict missing code given both the prefix and suffix context, powering the inline autocomplete experience in editors like Copilot and Cursor.

AI Code Generation · Prompt Engineering · Generative AI · GenAI Explained · 2026-02-19
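Concretely, a FIM request reorders the document around sentinel tokens so the model generates only the missing middle. A minimal sketch, using the StarCoder-style sentinel names (other models use different tokens, e.g. Code Llama's `<PRE>`/`<SUF>`/`<MID>`):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # Prefix and suffix surround the cursor; the model emits only the middle.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

before = "def area(r):\n    return "   # code before the cursor
after = " * r * r\n"                   # code after the cursor
prompt = build_fim_prompt(before, after)
```

Given this prompt, a FIM-trained model would complete the gap with something like `math.pi`, conditioned on both sides of the cursor rather than the prefix alone.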
Code LLMs — models specialized for programming
1:35

Codex, CodeLlama, StarCoder, DeepSeek Coder — models trained on massive code corpora that understand syntax, APIs, libraries, and programming patterns.

AI Code Generation · Prompt Engineering · Generative AI · GenAI Explained · 2026-02-19
Meta-prompting — LLMs writing better prompts
1:20

Using one LLM to generate, evaluate, and iteratively optimize prompts for another model, automating the prompt engineering process itself.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
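The generate-evaluate-optimize loop can be sketched as a tiny hill climber. Both functions below are stand-ins: `propose_variants` would be an LLM asked to rewrite the prompt, and `score` would be the target model's accuracy on a held-out eval set.

```python
def propose_variants(base: str):
    """Stand-in for a prompt-writer LLM; real systems sample rewrites."""
    return [base, base + " Think step by step.", base + " Answer in one word."]

def score(prompt: str) -> float:
    """Stand-in for eval-set accuracy of the target model on this prompt."""
    return 0.5 + 0.3 * ("step by step" in prompt) + 0.1 * ("one word" in prompt)

def optimize(base: str, rounds: int = 3) -> str:
    best = base
    for _ in range(rounds):
        candidates = propose_variants(best)
        best = max(candidates, key=score)   # keep the highest-scoring variant
    return best
```

Real meta-prompting systems keep the same skeleton but spend most of their budget on the `score` step, since each evaluation is a batch of model calls.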
Prompt chaining — multi-step workflows across prompts
1:40

Decomposing complex tasks into sequential prompt calls where each step's output feeds as context into the next step's input.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
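The chaining pattern itself is just a fold over prompt templates. In this sketch `fake_llm` is a scripted stand-in for a real model call; the part the episode describes is the loop, where each step's output is formatted into the next step's prompt.

```python
def fake_llm(prompt: str) -> str:
    """Scripted stand-in for an LLM API call."""
    if prompt.startswith("Extract the topic"):
        return "solar panels"
    if prompt.startswith("List one benefit of"):
        return "lower electricity bills"
    return "unknown"

STEPS = [
    "Extract the topic from: {input}",
    "List one benefit of {input}",
]

def run_chain(text: str) -> str:
    result = text
    for template in STEPS:
        result = fake_llm(template.format(input=result))  # output -> next input
    return result
```

Decomposing this way trades latency (two calls instead of one) for controllability: each step can be tested, logged, and retried independently.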
Structured output prompting — JSON and schema-constrained generation
1:19

Techniques and instructions that force LLM output into machine-parseable formats for reliable downstream integration with software systems.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
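One common shape for this: pair a schema instruction in the prompt with a strict parser on the reply, rejecting anything that drifts. A minimal sketch with an invented two-field schema:

```python
import json

SCHEMA_INSTRUCTION = (
    "Reply with ONLY a JSON object matching "
    '{"name": string, "priority": 1-5}. No prose.'
)

def parse_reply(reply: str) -> dict:
    """Validate a model reply against the expected shape; raise if it drifts."""
    data = json.loads(reply)                      # fails on prose around the JSON
    if not isinstance(data.get("name"), str):
        raise ValueError("missing string field 'name'")
    if data.get("priority") not in range(1, 6):
        raise ValueError("'priority' must be 1-5")
    return data

reply = '{"name": "fix login bug", "priority": 2}'  # a well-behaved model reply
task = parse_reply(reply)
```

When validation fails, callers typically re-prompt with the error message appended; many APIs also offer constrained decoding that guarantees schema-valid output at generation time.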
Tree of Thoughts — branching solution exploration
1:24

The model generates multiple reasoning paths, evaluates each branch, and prunes bad directions — systematic search over the space of possible solutions.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
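The expand-evaluate-prune cycle is easiest to see on a toy search problem. Here "thoughts" are candidate next steps (reach a target number from 1 using +3 and *2), the evaluator scores distance to the goal, and beam pruning keeps only the most promising branches; in real ToT, the model generates the branches and also acts as the evaluator.

```python
def expand(state):
    """Each 'thought' is one candidate next step from the current state."""
    value, path = state
    return [(value + 3, path + ["+3"]), (value * 2, path + ["*2"])]

def tot_search(target, beam_width=2, max_depth=6):
    frontier = [(1, [])]                      # root of the thought tree
    for _ in range(max_depth):
        children = [c for s in frontier for c in expand(s)]
        for value, path in children:
            if value == target:
                return path                   # a branch reached the goal
        # evaluate branches, then prune: keep those closest to the target
        children.sort(key=lambda s: abs(target - s[0]))
        frontier = children[:beam_width]
    return None
```

The pruning step is what separates this from exhaustive search: bad directions are abandoned early, at the risk of discarding a branch that would have paid off later.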
ReAct — interleaving reasoning with action
1:32

A prompting framework where the model alternates between thinking about what to do (Reason), taking actions (tool calls), and processing observations.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
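The control flow can be sketched with a scripted policy. In a real agent each Thought/Action pair comes from an LLM; here a canned script stands in for the model so the loop itself (reason, call a tool, append the observation, repeat) is visible.

```python
TOOLS = {"calculator": lambda expr: str(eval(expr))}   # toy tool registry

SCRIPT = [  # stand-in for model outputs, one (Thought, Action) per turn
    ("I need to compute the product.", ("calculator", "6 * 7")),
    ("The observation gives the answer.", ("finish", None)),
]

def react(question: str):
    trace, answer = [], None
    for thought, (action, arg) in SCRIPT:
        trace.append(f"Thought: {thought}")
        if action == "finish":
            answer = trace[-2].removeprefix("Observation: ")
            break
        observation = TOOLS[action](arg)               # execute the tool
        trace.append(f"Observation: {observation}")    # fed back as context
    return answer, trace
```

The key property is that each observation is appended to the context before the next reasoning step, so the model's later thoughts can react to what the tools actually returned.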
Chain-of-thought (CoT) — step-by-step reasoning
1:27

Adding "Let's think step by step" or showing worked reasoning dramatically improves accuracy on math, logic, and multi-step problems.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
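The technique is purely a change to the prompt string. A minimal sketch contrasting a direct prompt with the zero-shot CoT variant:

```python
def direct_prompt(question: str) -> str:
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    # The appended trigger phrase is the whole trick: it elicits
    # intermediate reasoning steps before the final answer.
    return f"Q: {question}\nA: Let's think step by step."

q = "A bat and ball cost $1.10 total; the bat costs $1 more. Ball price?"
prompt = cot_prompt(q)
```

Few-shot CoT goes one step further, prepending worked examples whose answers include the reasoning, not just the result.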
Zero-shot prompting — instructions without examples
1:40

Relying entirely on the model's pre-trained knowledge and instruction tuning by providing only a clear, specific task description.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
Few-shot prompting — teaching by example in context
1:16

Including 2-5 input/output examples directly in the prompt so the model infers the desired pattern and applies it to new inputs without any training.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
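Assembling a few-shot prompt is a small formatting exercise: render each input/output pair in a consistent layout, then end with the new input and an empty output slot for the model to fill. A sketch with a made-up sentiment task:

```python
def few_shot_prompt(examples, query):
    """Format input/output pairs plus the new input; the model infers the pattern."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

EXAMPLES = [
    ("great service, will return", "positive"),
    ("cold food and rude staff", "negative"),
]
prompt = few_shot_prompt(EXAMPLES, "the pasta was wonderful")
```

Consistency matters more than wording here: the model mimics whatever delimiter and label format the examples establish.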
System prompts — persistent behavioral instructions
1:29

Hidden instructions prepended to every conversation turn that define persona, rules, output format, tool access, and behavioral boundaries.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19
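In chat-style APIs the system prompt persists because the client resends it ahead of the history on every request. A sketch using the common role/content message shape (exact field names vary by provider):

```python
SYSTEM = "You are a terse SQL tutor. Answer in at most two sentences."

def build_messages(history, user_turn):
    # The system message is (re)sent on every request, ahead of the history,
    # so its instructions apply to every turn of the conversation.
    return [{"role": "system", "content": SYSTEM}, *history,
            {"role": "user", "content": user_turn}]

history = [
    {"role": "user", "content": "What does GROUP BY do?"},
    {"role": "assistant", "content": "It buckets rows sharing a value."},
]
messages = build_messages(history, "And HAVING?")
```

Because the system message sits outside the visible conversation, end users never see it, which is why it is the usual home for persona, safety rules, and tool definitions.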
Prompt engineering — designing inputs for desired outputs
1:19

The practice of crafting structured prompts that reliably guide LLMs to produce accurate, well-formatted, and useful responses.

Prompt Engineering · Generative AI · GenAI Explained · AI Podcast · 2026-02-19