Towards Apples to Apples for AI Evaluations: From Real-World Use Cases to Evaluation Scenarios
Yee-Yin Choong, Kristen Greene, Alice Qian, Meryem Marasli, Ziqi Yang, et al.
TLDR
A human-centered process transforms real-world AI use cases into consistent evaluation scenarios, enabling "apples-to-apples" comparisons.
Key contributions
- Advocates for methodological transparency, operational grounding, and human-centered design in AI evaluations.
- Proposes an "AI Use Case Worksheet" and a repeatable process for transforming high-level use cases into detailed evaluation scenarios (see the sketch after this list).
- Demonstrates the worksheet's utility in the U.S. financial services sector, generating 107 scenarios via a three-stage LLM-prompting pipeline with iterative human review.
- Integrates human checkpoints and a validation rubric to ensure scenario quality and real-world operational grounding.
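As a rough illustration of the worksheet's structure, its six elements could be captured in a simple record type. This is a minimal Python sketch, not the paper's actual artifact: the field names paraphrase the abstract's element list, and the example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseWorksheet:
    """Hypothetical record for the six worksheet elements named in the abstract."""
    use_case: str   # high-level use case, e.g. "credit memo generation"
    sector: str     # e.g. "U.S. financial services"
    direct_users: list[str] = field(default_factory=list)    # who operates the system
    indirect_users: list[str] = field(default_factory=list)  # who is affected by its outputs
    intended_outcomes: list[str] = field(default_factory=list)
    positive_impacts: list[str] = field(default_factory=list)
    negative_impacts: list[str] = field(default_factory=list)
    kpis_and_metrics: list[str] = field(default_factory=list)

# Example instantiation with purely illustrative values (not from the paper):
worksheet = AIUseCaseWorksheet(
    use_case="internal call center support",
    sector="U.S. financial services",
    direct_users=["call center agents"],
    indirect_users=["bank customers"],
    intended_outcomes=["faster resolution of customer inquiries"],
    positive_impacts=["reduced handle time"],
    negative_impacts=["incorrect guidance reaching customers"],
    kpis_and_metrics=["average handle time", "escalation rate"],
)
```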
Why it matters
Current AI evaluations often lack methodological consistency, producing results that cannot be meaningfully compared. This work provides a structured, human-centered framework for creating robust, real-world evaluation scenarios, so that AI systems are assessed consistently and in operationally relevant contexts.
Original Abstract
AI measurement science has a wide variety of methodologies and measurements for comparing AI systems, resulting in what often appear to be "apples-to-oranges" comparisons across AI evaluations. To move toward "apples-to-apples" comparisons in real-world AI evaluations, this work advocates for methodological transparency in evaluation scenarios, operational grounding, and human-centered design (HCD) principles. We propose a repeatable process for transforming high-level use cases into detailed scenarios by eliciting use cases from subject matter experts (SMEs) via a structured AI Use Case Worksheet with six key elements: use case, sector, user (direct and indirect), intended outcomes, expected impacts (positive and negative), and KPIs and metrics. We demonstrate the utility of the worksheet and process in the U.S. financial services sector. This paper reports on example high-level AI use cases identified by financial services sector SMEs: cyber defense enablement, developer productivity, financial crime aggregation, suspicious activity report (SAR) filing, credit memo generation, and internal call center support. The AI use cases provided are illustrative of the process, not exhaustive. Central to our work is a three-stage expansion pipeline combining LLM prompting with human reviews to generate 107 scenarios from the use cases elicited from SMEs. This process integrates iterative human reviews at every juncture to ensure operational grounding: for scenario titles and descriptions; for core scenario elements such as users, benefits and risks, and metrics; and for scenario narratives and evaluation objectives. Human checkpoints ensure scenarios remain reflective of real-world usage and human needs. We describe a validation rubric to assess scenario quality. By defining key scenario components, this work supports a more consistent and meaningful paradigm for human-centered AI evaluations.
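To make the three-stage expansion pipeline concrete, the sketch below shows how LLM prompting might be interleaved with a human review checkpoint after each stage. It is a minimal Python sketch under stated assumptions: `llm_generate` and `human_review` are hypothetical placeholders rather than the authors' implementation, and the prompts are illustrative only.

```python
def llm_generate(prompt: str) -> str:
    """Placeholder for a call to any LLM provider's API (hypothetical)."""
    raise NotImplementedError

def human_review(draft: str, checkpoint: str) -> str:
    """Placeholder for an iterative human review; per the abstract, reviewers
    revise drafts at every juncture to keep scenarios operationally grounded."""
    return draft

def expand_use_case(use_case: str) -> dict[str, str]:
    # Stage 1: scenario titles and descriptions
    titles = human_review(
        llm_generate(f"Propose evaluation scenario titles and descriptions for: {use_case}"),
        checkpoint="titles_and_descriptions",
    )
    # Stage 2: core scenario elements (users, benefits and risks, metrics)
    elements = human_review(
        llm_generate(f"For each scenario, identify users, benefits, risks, and metrics:\n{titles}"),
        checkpoint="core_elements",
    )
    # Stage 3: scenario narratives and evaluation objectives
    narratives = human_review(
        llm_generate(f"Write a narrative and evaluation objectives for each scenario:\n{elements}"),
        checkpoint="narratives_and_objectives",
    )
    # A validation rubric (described in the paper; criteria not reproduced here)
    # would then score the finished scenarios before they are used in evaluations.
    return {"titles": titles, "elements": elements, "narratives": narratives}
```

The design point, per the abstract, is that no stage's output flows forward without passing a human checkpoint, which is what keeps generated scenarios reflective of real-world usage.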