ArXiv TLDR

CoLA: A Choice Leakage Attack Framework to Expose Privacy Risks in Subset Training

arXiv:2604.12342

Qi Li, Cheng-Long Wang, Yinzhi Cao, Di Wang

cs.CR, cs.CV

TLDR

This paper introduces CoLA, a framework showing that training on a selected subset of data can leak sensitive information about which samples were chosen, challenging the assumption that training on less data means less privacy risk.

Key contributions

  • Introduces CoLA, a unified framework for analyzing privacy leakage in subset selection.
  • Defines two new privacy surfaces: Training-membership MIA (TM-MIA) and Selection-participation MIA (SP-MIA); see the sketch after this list.
  • Proposes two practical attack scenarios: Subset-aware Side-channel Attacks and Black-box Attacks.
  • Demonstrates that subset training introduces significant privacy risks, extending membership exposure from individual models to the entire ML ecosystem.
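
To make the two membership notions concrete, here is a minimal sketch of which samples count as "members" under each attack. It is our own illustration, not code from the paper, and the sample names are hypothetical:

```python
# Toy illustration of the two membership notions (our construction, not the
# paper's code). A selection step picks a training subset from a candidate
# pool; TM-MIA and SP-MIA target different definitions of "member".

pool = {"a", "b", "c", "d"}   # samples that entered subset selection
selected = {"a", "b"}         # the subset actually trained on
outside = {"e", "f"}          # samples the pipeline never saw

def tm_member(x):
    """TM-MIA: was x in the subset the model was trained on?"""
    return x in selected

def sp_member(x):
    """SP-MIA: did x participate in subset selection at all?"""
    return x in pool

for x in sorted(pool | outside):
    print(f"{x}: TM-member={tm_member(x)}, SP-member={sp_member(x)}")
```

Samples c and d are non-members to a classic training-set MIA but members under SP-MIA; that gap is the enlarged privacy surface the paper studies.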

Why it matters

This paper challenges the common belief that training on data subsets inherently reduces privacy risks. It reveals that the very process of selecting data can introduce new vulnerabilities, expanding the scope of privacy concerns beyond just the training set to the entire data-model supply chain. This work is crucial for developing more robust privacy-preserving ML practices.

Original Abstract

Training models on a carefully chosen portion of data rather than the full dataset is now a standard preprocessing step in modern ML. From vision coreset selection to large-scale filtering in language models, it enables scalability with minimal utility loss. A common intuition is that training on fewer samples should also reduce privacy risks. In this paper, we challenge this assumption. We show that subset training is not privacy-free: the very choices of which data are included or excluded can introduce a new privacy surface and leak more sensitive information. Such information can be captured by adversaries either through side-channel metadata from the subset selection process or via the outputs of the target model. To systematically study this phenomenon, we propose CoLA (Choice Leakage Attack), a unified framework for analyzing privacy leakage in subset selection. In CoLA, depending on the adversary's knowledge of the side-channel information, we define two practical attack scenarios: Subset-aware Side-channel Attacks and Black-box Attacks. Under both scenarios, we investigate two privacy surfaces unique to subset training: (1) Training-membership MIA (TM-MIA), which concerns only the privacy of training data membership, and (2) Selection-participation MIA (SP-MIA), which concerns the privacy of all samples that participated in the subset selection process. Notably, SP-MIA enlarges the notion of membership from model training to the entire data-model supply chain. Experiments on vision and language models show that existing threat models underestimate subset-training privacy risks: the expanded privacy surface leaks both training and selection membership, extending risks from individual models to the broader ML ecosystem.
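
For intuition about the Black-box scenario, the following self-contained sketch, which is our construction and not CoLA's actual attack, trains a model on a toy "coreset" and compares black-box per-sample losses across three populations. The dataset, model, and distance-based selection rule are all assumptions chosen for brevity; a standard loss-threshold MIA would turn any systematic gap between these populations into a membership guess:

```python
# Self-contained toy, NOT CoLA's attack: dataset, model, and selection rule
# are all our assumptions, used only to show where black-box loss gaps can
# come from in subset training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=3000) > 0).astype(int)

X_pool, y_pool = X[:2000], y[:2000]   # entered subset selection
X_out, y_out = X[2000:], y[2000:]     # never touched the pipeline

# Toy "coreset selection": keep the 1000 pool points closest to the origin.
order = np.argsort(np.linalg.norm(X_pool, axis=1))
sel, rest = order[:1000], order[1000:]

model = LogisticRegression(max_iter=1000).fit(X_pool[sel], y_pool[sel])

def losses(Xs, ys):
    """Per-sample cross-entropy computed from black-box predictions only."""
    p = model.predict_proba(Xs)[np.arange(len(ys)), ys]
    return -np.log(p + 1e-12)

# TM-MIA exploits the trained-vs-rest gap; SP-MIA asks whether even the
# filtered-out pool points look different from true outsiders.
for name, (Xs, ys) in {
    "trained": (X_pool[sel], y_pool[sel]),
    "pool-only": (X_pool[rest], y_pool[rest]),
    "outside": (X_out, y_out),
}.items():
    print(f"{name:10s} mean loss = {losses(Xs, ys).mean():.3f}")
```

The point of the comparison is that the selection rule itself can make the filtered-out pool points statistically distinguishable from true outsiders, which is exactly the leakage SP-MIA formalizes.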
