Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan
TLDR
Tree of Thoughts (ToT) is a novel inference framework that enables large language models to perform strategic, multi-step problem solving by exploring multiple reasoning paths and backtracking when needed.
Key contributions
- Introduces ToT, a generalization of Chain of Thought prompting that treats intermediate reasoning steps as nodes in a search tree.
- Enables language models to deliberate by exploring, evaluating, and backtracking over multiple thought sequences to improve decision making.
- Demonstrates large gains on tasks requiring non-trivial planning or search: Game of 24 (where GPT-4 with ToT solves 74% of tasks versus 4% with chain-of-thought prompting), Creative Writing, and Mini Crosswords.
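The explore-evaluate-prune loop described in the bullets above can be sketched as a generic breadth-first ToT controller. This is a minimal illustration, not the authors' implementation: the `propose` and `evaluate` callables stand in for the paper's two LM calls (thought generation and self-evaluation), and the digit-sum toy task is a hypothetical stand-in for problems like Game of 24.

```python
from typing import Callable, List, Optional

def tot_bfs(
    root: str,
    propose: Callable[[str], List[str]],   # stand-in for the LM's thought generator
    evaluate: Callable[[str], float],      # stand-in for the LM's self-evaluation score
    is_solution: Callable[[str], bool],
    beam_width: int = 3,
    max_depth: int = 4,
) -> Optional[str]:
    """Breadth-first ToT: expand every frontier state into candidate thoughts,
    then keep only the `beam_width` most promising ones. Discarding low-value
    branches plays the role of backtracking."""
    frontier = [root]
    for _ in range(max_depth):
        candidates = [t for state in frontier for t in propose(state)]
        for cand in candidates:
            if is_solution(cand):
                return cand
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam_width]
        if not frontier:
            return None
    return None

# Toy stand-in task: build a 3-digit string whose digits sum to 10.
TARGET, LENGTH = 10, 3
result = tot_bfs(
    root="",
    propose=lambda s: [s + str(d) for d in range(1, 10)] if len(s) < LENGTH else [],
    # Heuristic value: how closely the partial sum tracks the target pro rata.
    evaluate=lambda s: -abs(sum(map(int, s)) - TARGET * len(s) / LENGTH),
    is_solution=lambda s: len(s) == LENGTH and sum(map(int, s)) == TARGET,
)
print(result)
```

In the paper, both proposing thoughts and valuing partial solutions are themselves prompts to the language model; the framework also supports a depth-first variant with explicit backtracking (used for Mini Crosswords).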
Why it matters
This paper addresses a fundamental limitation of current language model inference—linear, token-by-token generation—by enabling more flexible, strategic reasoning through a tree-based search over intermediate thoughts. This advancement allows LMs to solve complex problems requiring lookahead and exploration, substantially improving their utility across diverse domains that demand deliberate problem solving.
Original Abstract
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.