Chain of Thought

Last Updated: March 18, 2026

Chain-of-Thought (CoT) is a reasoning approach that instructs AI models to think step-by-step and generate intermediate results before arriving at the final answer.

At-a-Glance

  • The term CoT became widely recognized after a 2022 research paper by Google researchers, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, which suggested that prompting models with step-by-step reasoning examples improved the performance of LLMs.
  • Many AI providers now hide raw chain-of-thought from users, even when models internally reason before giving a response.
     

ELI5 (Explain Like I’m 5)

Imagine you ask two children, “If you have 2 apples and get 3 more, how many do you have?”

One child says “5.” Another says, “I started with 2, then added 3, so the answer is 2+3, which equals 5.”

The second child is showing their thinking step by step. 

Usually, an AI tries to guess the answer instantly. When we use CoT, we are telling the AI, “Don’t guess; show your reasoning.”

Forcing the model to write out its logic step by step prevents it from skipping over the details that lead to the correct conclusion, especially in tasks that involve math or logic.
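In practice, the simplest version of this is just a change to the prompt text. The sketch below (a toy illustration; no model or API is shown, and the prompt strings are only examples) contrasts a direct prompt with a zero-shot CoT prompt that appends a step-by-step cue:

```python
QUESTION = "If you have 2 apples and get 3 more, how many do you have?"

def direct_prompt(question: str) -> str:
    """Ask for the answer alone -- the model may 'guess' in one jump."""
    return f"{question}\nAnswer:"

def cot_prompt(question: str) -> str:
    """Append a step-by-step cue so the model writes out its reasoning first."""
    return f"{question}\nLet's think step by step."

print(direct_prompt(QUESTION))
print(cot_prompt(QUESTION))
```

The same question goes to the model either way; only the instruction at the end changes what kind of output the model produces.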

How Chain-of-Thought Works

When a model is prompted to think step by step, it doesn’t actually reason like a human. Instead, it generates a sequence of intermediate tokens that represent reasoning steps. Each new token is based on everything generated so far.

So instead of solving a problem in one jump, the model:

  • Generates a first reasoning step
  • Uses that step as context
  • Generates the next step
  • Repeats until it reaches a conclusion

This effectively creates a feedback loop through the context window, where earlier tokens guide later ones. Accuracy on the final answer often improves because it is built incrementally rather than produced in a single jump.
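The loop above can be sketched in a few lines. This is a toy simulation, not a real model: `generate_step` is a hand-written stand-in for one model call, but the control flow (generate a step, append it to the context, condition the next step on everything so far) mirrors the feedback loop described above.

```python
def generate_step(context: str) -> str:
    """Stand-in for one model call; a real LLM would emit tokens here."""
    if "2 + 3" not in context:
        return "Start with 2 apples, then add 3 more: 2 + 3."
    return "2 + 3 = 5, so the final answer is 5."

def chain_of_thought(question: str, max_steps: int = 10) -> str:
    context = question
    for _ in range(max_steps):
        step = generate_step(context)   # next step is conditioned on everything so far
        context += "\n" + step          # earlier tokens guide later ones
        if "final answer" in step:      # stop once a conclusion is reached
            break
    return context

print(chain_of_thought("If you have 2 apples and get 3 more, how many do you have?"))
```

Each pass through the loop extends the context, so every later step "sees" all earlier reasoning, which is exactly why a flawed early step can steer the rest of the chain.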

Why CoT Matters

Chain-of-thought is a foundational technique in modern AI systems. It is especially useful for:

  • Mathematical reasoning
  • Multi-step problem solving
  • Code generation and debugging
  • Decision-making
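For these use cases, the 2022 paper's few-shot style is common: the prompt includes a worked example whose answer spells out the intermediate steps, so the model imitates that format on the new question. The sketch below builds such a prompt (the exemplar is adapted from the paper's well-known tennis-ball example; the second question is made up for illustration):

```python
# One worked example whose answer shows its reasoning steps.
EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def few_shot_cot(question: str) -> str:
    """Prepend the worked exemplar so the model mimics its step-by-step format."""
    return f"{EXEMPLAR}\nQ: {question}\nA:"

prompt = few_shot_cot("A coder fixes 4 bugs a day for 3 days. How many bugs is that?")
print(prompt)
```

The trailing `A:` invites the model to continue in the exemplar's style, producing reasoning steps before its final answer.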

Limitations of CoT

  1. While CoT is powerful, it can increase the number of tokens used, which may lead to higher latency and costs. 
  2. If the model’s initial reasoning step is flawed, the subsequent steps will likely compound that error.
  3. Hallucinations are possible with LLMs. A model can generate convincing-looking reasoning that is inaccurate, incomplete, or fabricated.

While showing reasoning can improve transparency, many production systems now hide the full chain-of-thought to prevent misuse or unwanted verbosity in the final answer. 

Instead, models reason internally; some LLM platforms briefly display a “thinking” indicator or summary while this happens, but the final answer is kept concise and the full reasoning steps are not shown.

Chain-of-thought doesn’t change the AI model; it changes how you talk to it.
