ArXiv TLDR

Tight Auditing of Differential Privacy in MST and AIM

2604.18352

Georgi Ganev, Meenatchi Sundaram Muthu Selva Annamalai, Bogdan Kulynych

cs.CR · cs.AI · cs.LG

TLDR

This paper introduces a GDP-based framework for tightly auditing differential privacy in synthetic data generators like MST and AIM, revealing a small theory-practice gap.

Key contributions

  • Introduces a GDP-based framework for tightly auditing differential privacy in MST and AIM.
  • Measures privacy via the full false-positive/false-negative tradeoff for robust analysis.
  • Achieves the first tight privacy audits for MST and AIM in strong-privacy regimes.
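The second bullet is the core idea: instead of summarizing an attack by a single success rate, a GDP audit looks at a false-positive/false-negative operating point of a distinguishing attack and converts it into an empirical GDP parameter. Under μ-GDP, the attainable tradeoff is β(α) = Φ(Φ⁻¹(1−α) − μ), so one operating point yields μ = Φ⁻¹(1−α) − Φ⁻¹(β). A minimal sketch of that conversion (function name `empirical_mu` is ours, not from the paper; the authors' full pipeline involves worst-case settings and confidence intervals not shown here):

```python
from statistics import NormalDist

def empirical_mu(fpr: float, fnr: float) -> float:
    """GDP parameter implied by one attack operating point.

    Under mu-GDP the optimal tradeoff curve is
        beta(alpha) = Phi(Phi^{-1}(1 - alpha) - mu),
    so solving for mu at a measured (FPR, FNR) = (alpha, beta) gives
        mu = Phi^{-1}(1 - alpha) - Phi^{-1}(beta).
    """
    nd = NormalDist()
    return nd.inv_cdf(1.0 - fpr) - nd.inv_cdf(fnr)

# Example: an attack with 5% false positives and ~74% false negatives
# sits on (roughly) the 1-GDP tradeoff curve.
print(empirical_mu(0.05, 0.7405))
```

An attack that is pure guessing (FNR = 1 − FPR) gives μ = 0, while stronger attacks push the operating point below the diagonal and yield larger μ; the audit is "tight" when this empirical μ approaches the μ implied by the theoretical analysis.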

Why it matters

Auditing differential privacy in widely used synthetic data generators is crucial but challenging. This framework provides a robust method to verify privacy guarantees, helping bridge the gap between theoretical privacy and practical implementations.

Original Abstract

State-of-the-art Differentially Private (DP) synthetic data generators such as MST and AIM are widely used, yet tightly auditing their privacy guarantees remains challenging. We introduce a Gaussian Differential Privacy (GDP)-based auditing framework that measures privacy via the full false-positive/false-negative tradeoff. Applied to MST and AIM under worst-case settings, our method provides the first tight audits in the strong-privacy regime. For $(ε,δ)=(1,10^{-2})$, we obtain $μ_{emp}\approx0.43$ vs. implied $μ=0.45$, showing a small theory-practice gap. Our code is publicly available: https://github.com/sassoftware/dpmm.
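The abstract compares an empirical μ against an "implied" μ for a given (ε, δ) budget. One standard way to obtain such an implied value is to invert the GDP-to-(ε, δ) conversion of Dong, Roth & Su, δ(ε) = Φ(−ε/μ + μ/2) − e^ε · Φ(−ε/μ − μ/2), solving for the μ whose curve passes through the target (ε, δ). A sketch under that assumption (function names `delta_gdp`/`implied_mu` are illustrative; the exact conversion the authors use may differ, so the value here need not match the paper's 0.45):

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def delta_gdp(mu: float, eps: float) -> float:
    """delta(eps) curve satisfied by a mu-GDP mechanism (Dong, Roth & Su)."""
    return phi(-eps / mu + mu / 2.0) - math.exp(eps) * phi(-eps / mu - mu / 2.0)

def implied_mu(eps: float, delta: float, lo: float = 1e-6, hi: float = 20.0) -> float:
    """Smallest mu whose GDP delta-curve meets (eps, delta).

    Bisection is valid because delta_gdp is increasing in mu for fixed eps.
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta_gdp(mid, eps) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Implied GDP parameter for the budget quoted in the abstract.
print(implied_mu(1.0, 1e-2))
```

Comparing this implied μ with the empirically measured μ from the attack's FPR/FNR tradeoff is what quantifies the theory-practice gap the abstract reports.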

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.