Limits of Personalizing Differential Privacy Budgets
TLDR
This paper shows that personalized differential privacy budgets have significant limitations: for mean estimation, a simple thresholding baseline captures most of the achievable utility, leaving only limited gains for fully personalized mechanisms.
Key contributions
- Demonstrates major limitations of personalized differential privacy budgets for mean estimation.
- Introduces a simple thresholding operator for selecting the right effective privacy budget.
- Quantifies the limited gains of fully personalized mechanisms compared to the thresholding baseline.
- Establishes upper bounds and identifies regimes of maximal gain for arbitrary privacy requirements.
Why it matters
This paper challenges the assumption that fully personalized differential privacy budgets are always worth the added complexity. It shows that a simpler thresholding approach often achieves comparable utility, helping practitioners design simpler and more effective differentially private systems.
Original Abstract
A key technical difficulty in differential privacy is selecting a privacy budget that satisfies privacy requirements while maximizing utility. A natural and well-studied workaround is to use personalized privacy budgets, which may differ across agents. In this paper, we show that personalized budgets come with major limitations and that for mean estimation, the dominant factor is not full personalization, but rather choosing the right effective privacy budget. This can be achieved through a simple thresholding operator that we describe. Compared with this thresholding baseline, the gains obtained by fully personalized mechanisms are limited. In particular, we precisely quantify the constant-factor improvement in settings with mixed private and public datasets and in private datasets with two levels of privacy requirements. We also establish upper bounds and identify regimes of maximal gain for arbitrary privacy requirements.
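To make the thresholding idea concrete, here is a minimal sketch of one natural thresholding baseline for private mean estimation with heterogeneous budgets. The paper does not specify its operator's implementation; this hypothetical version keeps only the agents whose personal budget meets a chosen threshold and releases a Laplace-noised mean at that common effective budget. The function name and parameters are illustrative assumptions, not the authors' mechanism.

```python
import numpy as np

def thresholded_private_mean(values, budgets, threshold, lo=0.0, hi=1.0, rng=None):
    """Hypothetical thresholding baseline (illustrative, not the paper's exact
    operator): drop agents whose personal budget is below `threshold`, then run
    a standard Laplace-mechanism mean at the common budget `threshold`."""
    if rng is None:
        rng = np.random.default_rng()
    values = np.asarray(values, dtype=float)
    budgets = np.asarray(budgets, dtype=float)

    # Keep only agents who can tolerate the common effective budget.
    keep = budgets >= threshold
    n = int(keep.sum())
    if n == 0:
        raise ValueError("no agent's budget meets the threshold")

    # Clip values to [lo, hi] so the mean has bounded sensitivity (hi - lo) / n.
    clipped = np.clip(values[keep], lo, hi)
    noise = rng.laplace(scale=(hi - lo) / (n * threshold))
    return clipped.mean() + noise
```

The design trade-off is the one the abstract points to: raising the threshold increases the effective budget (less noise per point) but discards more agents (fewer points), so choosing the right threshold, rather than fully personalizing noise per agent, is the dominant factor in utility.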