LATTICE: Evaluating Decision Support Utility of Crypto Agents
Aaron Chan, Tengfei Li, Tianyi Xiao, Angela Chen, Junyi Du, et al.
TLDR
LATTICE is a benchmark that uses LLM judges to evaluate the decision support utility of real-world crypto agents across 16 task types and six evaluation dimensions.
Key contributions
- Introduces LATTICE, a benchmark for evaluating the decision support utility of crypto agents.
- Defines six evaluation dimensions and 16 task types for comprehensive assessment.
- Uses LLM judges for scalable, auditable, and automatic scoring of agent outputs (a minimal sketch follows this list).
- Evaluates six real-world crypto copilots, revealing dimension- and task-level performance trade-offs.
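The paper does not detail its judging pipeline beyond rubric-driven LLM judges, but a scoring loop of this kind can be sketched as below. Everything here is an illustrative assumption: the dimension names, rubric wording, and the `client.complete` interface are placeholders, not LATTICE's actual dimensions or API.

```python
# Minimal sketch of rubric-based LLM-judge scoring, assuming a generic
# chat-completion client. Dimensions and rubric text are hypothetical;
# LATTICE's actual rubrics and prompts may differ.
from statistics import mean

# Hypothetical decision-support dimensions (placeholders, not LATTICE's list).
RUBRICS = {
    "accuracy": "Score 1-5: are the facts and figures in the answer correct?",
    "actionability": "Score 1-5: does the answer support a concrete decision?",
    "risk_disclosure": "Score 1-5: are relevant risks and caveats surfaced?",
}

def judge_output(client, query: str, agent_output: str) -> dict[str, int]:
    """Ask an LLM judge to score one agent output on each rubric dimension."""
    scores = {}
    for dim, rubric in RUBRICS.items():
        prompt = (
            f"Rubric: {rubric}\n\nUser query:\n{query}\n\n"
            f"Agent output:\n{agent_output}\n\n"
            "Reply with a single integer score from 1 to 5."
        )
        reply = client.complete(prompt)  # assumed single-string completion API
        scores[dim] = int(reply.strip())
    return scores

def aggregate(per_query_scores: list[dict[str, int]]) -> dict[str, float]:
    """Average each dimension over all queries, then add one aggregate score."""
    dims = {d: mean(s[d] for s in per_query_scores) for d in RUBRICS}
    dims["aggregate"] = mean(dims.values())
    return dims
```

Because the rubric text lives in plain data rather than in model weights, this structure matches the abstract's claim that judge rubrics can be audited and updated as new dimensions, tasks, or human feedback arrive.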
Why it matters
LATTICE focuses on the decision support capabilities of crypto agents rather than on reasoning- or outcome-based evaluation alone. By using LLM judges and evaluating production-level copilots, it offers a scalable, realistic assessment of agent utility in real-world scenarios and surfaces performance trade-offs that matter to users.
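To make the trade-off point concrete, here is a toy calculation with invented scores: two copilots tie on the aggregate yet differ sharply per dimension, so the aggregate ranking alone would not tell a user which one fits their priorities.

```python
# Toy illustration (invented numbers): equal aggregates can hide
# dimension-level trade-offs between copilots.
copilot_a = {"accuracy": 4.5, "actionability": 3.0, "risk_disclosure": 4.5}
copilot_b = {"accuracy": 3.5, "actionability": 4.5, "risk_disclosure": 4.0}

for name, scores in [("A", copilot_a), ("B", copilot_b)]:
    agg = sum(scores.values()) / len(scores)
    print(name, round(agg, 2), scores)
# Both aggregate to 4.0, but a user who prioritizes actionable
# recommendations is better served by copilot B.
```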
Original Abstract
We introduce LATTICE, a benchmark for evaluating the decision support utility of crypto agents in realistic user-facing scenarios. Prior crypto agent benchmarks mainly focus on reasoning-based or outcome-based evaluation, but do not assess agents' ability to assist user decision-making. LATTICE addresses this gap by: (1) defining six evaluation dimensions that capture key decision support properties; (2) proposing 16 task types that span the end-to-end crypto copilot workflow; and (3) using LLM judges to automatically score agent outputs based on these dimensions and tasks. Crucially, the dimensions and tasks are designed to be evaluable at scale using LLM judges, without relying on ground truth from expert annotators or external data sources. In lieu of these dependencies, LATTICE's LLM judge rubrics can be continually audited and updated given new dimensions, tasks, criteria, and human feedback, thus promoting reliable and extensible evaluation. While other benchmarks often compare foundation models sharing a generic agent framework, we use LATTICE to assess production-level agents used in actual crypto copilot products, reflecting the importance of orchestration and UI/UX design in determining agent quality. In this paper, we evaluate six real-world crypto copilots on 1,200 diverse queries and report breakdowns across dimensions, tasks, and query categories. Our experiments show that most of the tested copilots achieve comparable aggregate scores, but differ more significantly on dimension-level and task-level performance. This pattern suggests meaningful trade-offs in decision support quality: users with different priorities may be better served by different copilots than the aggregate rankings alone would indicate. To support reproducible research, we open-source all LATTICE code and data used in this paper.