Artificial Intelligence

Chain-of-Thought

A prompting technique where the model is encouraged to show its step-by-step reasoning process before arriving at a final answer. This improves accuracy on complex reasoning tasks.

Why It Matters

Chain-of-thought prompting can dramatically improve LLM performance on math, logic, and multi-step reasoning tasks — often the difference between wrong and right answers.

Example

Prompting: "If a store has 23 apples, sells 17, and then receives 12 more, how many apples does it have? Think step by step." Instead of guessing a single number, the model reasons through intermediate steps: 23 - 17 = 6, then 6 + 12 = 18.
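The example above can be sketched in code. This is a minimal illustration, assuming a hypothetical `complete(prompt)` function standing in for whichever LLM client you use; the helper name `build_cot_prompt` is invented for this sketch.

```python
def build_cot_prompt(question: str) -> str:
    """Append a reasoning trigger so the model shows its work
    before committing to a final answer (chain-of-thought)."""
    return f"{question}\nLet's think step by step."

question = ("If a store has 23 apples, sells 17, and then receives "
            "12 more, how many apples does it have?")

prompt = build_cot_prompt(question)
print(prompt)

# Ground truth the model's reasoning trace should reach:
# 23 - 17 = 6, then 6 + 12 = 18.
answer = 23 - 17 + 12
print(answer)  # 18

# With a real client you would send the prompt, e.g. (hypothetical):
#   response = complete(prompt)
# and the response would contain the intermediate steps plus "18".
```

The key design choice is that the reasoning trigger is appended to the user's question rather than hidden in a system message, so the model's intermediate steps appear in the visible output where they can be checked.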

Think of it like...

Like showing your work on a math test — working through each step makes you less likely to make mistakes and helps identify where errors might creep in.

Related Terms