Tool Calling is Linearly Readable and Steerable in Language Models
Zekun Wu, Ze Wang, Seonglae Cho, Yufei Yang, Adriano Koshiyama + 2 more
TLDR
Researchers found that tool selection in LLMs is linearly readable and steerable, enabling wrong tool calls to be predicted and corrected before execution.
Key contributions
- Tool identity in LLMs is linearly readable and steerable, enabling intervention on tool selection.
- Mean-difference activation patching flips the chosen tool with 77-100% accuracy, and the generated JSON arguments match the new tool's schema (see the sketch after this list).
- Internal activation gaps predict tool-calling errors before execution: queries with the smallest top-1/top-2 gap produce 14-21x more wrong calls than those with the largest gap.
- Tool representations are formed during pretraining, with instruction tuning wiring them to the output.
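A minimal sketch of what the mean-difference intervention could look like in PyTorch. The hook point, layer index, activation shapes, and helper names here are illustrative assumptions, not the authors' released code:

```python
import torch

def mean_difference_vector(acts_tool_a: torch.Tensor,
                           acts_tool_b: torch.Tensor) -> torch.Tensor:
    """Steering vector: tool B's mean internal activation minus tool A's.

    acts_tool_*: (num_prompts, hidden_dim) activations collected at one
    layer/token position while the model selects each tool.
    """
    return acts_tool_b.mean(dim=0) - acts_tool_a.mean(dim=0)

def add_steering_hook(model, layer_idx: int, v: torch.Tensor, alpha: float = 1.0):
    """Add v to the residual stream at layer_idx on every forward pass.

    Assumes a HuggingFace-style decoder exposing model.model.layers;
    the exact module path varies across Gemma/Qwen/Llama.
    """
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * v.to(hidden.device, hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return model.model.layers[layer_idx].register_forward_hook(hook)

# Usage sketch: with the hook registered, a prompt that would have called
# tool A should now name tool B, and the JSON arguments generated
# autoregressively afterwards follow tool B's schema.
# handle = add_steering_hook(model, layer_idx=20, v=steer_vec)
# output_ids = model.generate(**inputs)
# handle.remove()
```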
Why it matters
This research offers a crucial method to debug and improve tool-calling agents by making their internal decisions transparent and steerable. It enables proactive error detection and correction, preventing costly mistakes. This work also provides insights into how LLMs represent tool knowledge.
Original Abstract
When a tool-calling agent picks the wrong tool, the failure is invisible until execution: the email gets sent, the meeting gets missed. Probing 12 instruction-tuned models across Gemma 3, Qwen 3, Qwen 2.5, and Llama 3.1 (270M to 27B), we find the identity of the chosen tool is linearly readable and steerable inside the model. Adding the mean-difference between two tools' average internal activations switches which tool the model selects at 77-100% accuracy on name-only single-turn prompts (93-100% at 4B+), and the JSON arguments that follow autoregressively match the new tool's schema, so flipping the name is enough. The same per-tool means also flag likely errors before they happen: on Gemma 3 12B and 27B, queries where the gap between the top-1 and top-2 tool is smallest produce 14-21x more wrong calls than queries with the largest gap. The causal effect concentrates along one direction, the row of the output layer that produces the target tool's first token: a unit vector along it at matched magnitude already reaches 93-100%, while what is left over leaves the choice almost untouched. Activation patching localises this to a small set of mid- and late-layer attention heads, and a within-topic probe across 14 same-domain τ-bench airline tools reaches top-1 61-89% across five 4B-14B models, ruling out the reading that we are just moving the model along a topic axis. Even base models encode the right tool before they can emit it: cosine readout from the internal state recovers 69-82% on BFCL while base generation reaches only 2-10%, suggesting pretraining forms the representation and instruction tuning later wires it to the output. We measure tool identity selection and JSON schema correctness in single-turn fixed-menu settings; multi-turn agentic transfer is more fragile and is discussed in Limitations.
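The error-flagging and readout ideas reduce to simple vector comparisons against per-tool mean activations. Below is a minimal sketch, assuming the per-tool means and the query's hidden state are taken at the same layer and position; the margin threshold and all names are illustrative placeholders, not values from the paper:

```python
import torch
import torch.nn.functional as F

def tool_scores(hidden_state: torch.Tensor,
                tool_means: dict[str, torch.Tensor]) -> dict[str, float]:
    """Cosine readout: score each candidate tool by the similarity of the
    current hidden state to that tool's mean activation (the readout that
    recovers 69-82% on BFCL from base models)."""
    return {name: F.cosine_similarity(hidden_state, mu, dim=0).item()
            for name, mu in tool_means.items()}

def flag_risky_call(hidden_state: torch.Tensor,
                    tool_means: dict[str, torch.Tensor],
                    margin_threshold: float = 0.05):
    """Flag a call as error-prone when the top-1/top-2 score gap is small;
    the 0.05 threshold is an illustrative placeholder to be calibrated
    on held-out queries."""
    ranked = sorted(tool_scores(hidden_state, tool_means).values(), reverse=True)
    gap = ranked[0] - ranked[1]
    return gap < margin_threshold, gap
```

Because the check runs on internal activations, a risky call can be rerouted or escalated before the tool executes, which is where the reported 14-21x concentration of wrong calls in small-gap queries makes the margin a useful triage signal.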