A prompting technique in which you ask the model to show its reasoning step by step before giving a final answer. Instead of jumping straight to a conclusion, the model "thinks out loud," which substantially improves accuracy on multi-step tasks such as math word problems and logic puzzles.
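A minimal sketch of the idea, assuming a hypothetical `complete()` helper that sends a prompt string to whatever text-generation model you use (the helper and prompts are illustrative, not a specific library's API):

```python
def complete(prompt: str) -> str:
    # Placeholder: replace with a real call to your LLM client of choice.
    return "<model output here>"

question = "A store sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompting: the model must jump straight to an answer.
direct_prompt = f"{question}\nAnswer:"

# Chain-of-thought prompting: elicit step-by-step reasoning before
# the final answer, which tends to help on multi-step problems.
cot_prompt = (
    f"{question}\n"
    "Think through this step by step, showing your reasoning, "
    "then give the final answer on its own line."
)

print(complete(cot_prompt))
```

A common few-shot variant instead includes a couple of worked examples with visible reasoning in the prompt, so the model imitates the step-by-step pattern.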
Why it matters
Asking "explain your reasoning" isn't just for transparency — it actually makes models smarter. CoT reduced math errors by up to 50% in early studies. Most modern models now do this internally.