ArXiv TLDR

Sketching the Readout of Large Language Models for Scalable Data Attribution and Valuation

2604.16197

Yide Ran, Jianwen Xie, Minghui Wang, Wenjin Zheng, Denghui Zhang + 2 more

cs.LG

TLDR

RISE is a scalable method for data attribution and valuation in LLMs: it reads influence from hotspots at the output layer and compresses the resulting signals with CountSketch projections.

Key contributions

  • Introduces RISE, a scalable method for data attribution and valuation in large language models.
  • Focuses on output-layer influence hotspots, using a decomposed outer-product gradient and a dual-channel representation.
  • Achieves strong compression with CountSketch projections, reducing index storage by up to 112x.
  • Scales to 32B-parameter LLMs, where gradient-based methods become memory-infeasible.
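The decomposed outer-product gradient behind the dual-channel representation can be illustrated with a toy example: for cross-entropy loss at the output layer, the gradient with respect to the unembedding matrix factors into an outer product of a vocabulary-sized residual (softmax minus one-hot, loosely the lexical channel) and the final hidden state. This is a minimal sketch with illustrative names and dimensions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 100, 32                         # toy vocab size and hidden dim
W = 0.1 * rng.standard_normal((V, H))  # output (unembedding) matrix
h = rng.standard_normal(H)             # final hidden state for one token
y = 7                                  # target token id

def loss(Wm):
    """Cross-entropy of the target token under softmax(Wm @ h)."""
    logits = Wm @ h
    return -(logits[y] - np.log(np.sum(np.exp(logits))))

# Analytic gradient: dL/dW = (softmax(logits) - onehot(y)) outer h
logits = W @ h
p = np.exp(logits - logits.max())
p /= p.sum()
r = p.copy()
r[y] -= 1.0                            # vocab-sized residual factor
grad = np.outer(r, h)                  # rank-1: V*H entries from V + H numbers

# Spot-check one entry against a central finite difference
i, j, eps = 3, 5, 1e-6
Wp, Wm_ = W.copy(), W.copy()
Wp[i, j] += eps
Wm_[i, j] -= eps
fd = (loss(Wp) - loss(Wm_)) / (2 * eps)
assert abs(fd - grad[i, j]) < 1e-5
```

Storing the two factors (r, h) takes V + H floats instead of the V x H full gradient, which is what makes sketching and indexing at LLM scale tractable.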

Why it matters

Existing gradient-based methods for data attribution and valuation don't scale to large language models. RISE offers a practical, scalable alternative that sharpens our understanding of data-model synergy: it improves training-data selection and helps detect issues such as backdoored data, both crucial for modern LLM development.

Original Abstract

Data attribution and valuation are critical for understanding data-model synergy for Large Language Models (LLMs), yet existing gradient-based methods suffer from scalability challenges on LLMs. Inspired by human cognition, where decision making relies on a focused readout of relevant memories rather than replaying all pathways, we introduce RISE (Readout Influence Sketching Estimator). Instead of computing and indexing gradients across the entire LLM, RISE focuses on influence hotspots at the output layer, where influence signals concentrate, and the gradient admits a decomposed outer-product form. This enables a dual-channel representation combining a lexical residual channel (RH) and a semantic projected-error channel (GH). Applying CountSketch projections to these channels achieves strong compression while maintaining accurate attribution. Across the OLMo (1B-32B) and Pythia (14M-6.9B) families, RISE reduces index storage by up to 112× compared to RapidIn and scales to a 32B-parameter LLM, where gradient-based baselines such as RapidIn and ZO-Inf become memory-infeasible. We evaluate RISE on two paradigms: (1) retrospective attribution, retrieving influential training examples for specific predictions, and (2) prospective valuation, scoring candidate data utility zero-shot. We validate RISE on three tasks: Howdy backdoor data detection, Finance-Medical domain separation, and Brain Rot high-quality data selection. In a closed-loop Brain Rot study, continued pretraining on RISE-selected data yields consistent downstream improvements. Overall, RISE provides a practical and scalable primitive for influence analysis and training-data selection in modern large language models.
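CountSketch, the compression step in the abstract, hashes each coordinate of a high-dimensional vector into a small number of buckets with a random sign; inner products are preserved in expectation, which is the property attribution scores rely on. A minimal sketch with illustrative dimensions and seed, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 50_000, 1_024                  # original and sketched dimensions

# One shared CountSketch: each coordinate gets a bucket and a random sign
bucket = rng.integers(0, d, size=D)
sign = rng.choice([-1.0, 1.0], size=D)

def countsketch(v):
    """Project v from D dims down to d via signed bucket sums."""
    s = np.zeros(d)
    np.add.at(s, bucket, sign * v)    # unbuffered scatter-add into buckets
    return s

u = rng.standard_normal(D)            # e.g. an indexed training vector
v = u + 0.1 * rng.standard_normal(D)  # a correlated query vector

exact = u @ v
approx = countsketch(u) @ countsketch(v)
rel_err = abs(approx - exact) / abs(exact)
assert rel_err < 0.25                 # unbiased estimate with O(d**-0.5) noise
```

The same hash functions must be shared between the stored training-example sketches and the query sketch, so each vector is stored once at 1,024 floats instead of 50,000 and compared directly in sketch space.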
