DPrivBench: Benchmarking LLMs' Reasoning for Differential Privacy
Erchi Wang, Pengrun Huang, Eli Chien, Om Thakkar, Kamalika Chaudhuri, and two others
TLDR
DPrivBench is a new benchmark that evaluates large language models' ability to reason about differential privacy; even the strongest models handle textbook mechanisms but struggle with advanced algorithms.
Key contributions
- Introduces DPrivBench, a novel benchmark for evaluating LLMs' differential privacy reasoning.
- Covers a broad range of DP topics and difficulty levels, and is designed to resist shortcut reasoning through trivial pattern matching.
- Experiments show that LLMs handle basic DP mechanisms (see the sketch after this list) but struggle with advanced algorithms.
- Identifies key limitations and future research directions for automated DP reasoning.
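To make "basic DP mechanisms" concrete, below is a minimal sketch of the textbook Laplace mechanism, the kind of algorithm the strongest models reportedly handle well. The function name and parameters are illustrative, not taken from the benchmark.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value under epsilon-DP via the Laplace mechanism.

    For a query with L1 sensitivity `sensitivity`, adding Laplace noise
    with scale sensitivity / epsilon satisfies epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count (L1 sensitivity 1) at epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```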
Why it matters
This paper addresses the high barrier to entry for differential privacy by asking whether LLMs can automate DP reasoning. DPrivBench provides a critical tool for evaluating and improving that capability, guiding future research toward making privacy more accessible.
Original Abstract
Differential privacy (DP) has a wide range of applications for protecting data privacy, but designing and verifying DP algorithms requires expert-level reasoning, creating a high barrier for non-expert practitioners. Prior works either rely on specialized verification languages that demand substantial domain expertise or remain semi-automated and require human-in-the-loop guidance. In this work, we investigate whether large language models (LLMs) can automate DP reasoning. We introduce DPrivBench, a benchmark in which each instance asks whether a function or algorithm satisfies a stated DP guarantee under specified assumptions. The benchmark is carefully designed to cover a broad range of DP topics, span diverse difficulty levels, and resist shortcut reasoning through trivial pattern matching. Experiments show that while the strongest models handle textbook mechanisms well, all models struggle with advanced algorithms, revealing substantial gaps in current DP reasoning capabilities. Through further analytic study and failure-mode analysis, we identify several promising directions for improving automated DP reasoning. Our benchmark provides a solid foundation for developing and evaluating such methods, and complements existing benchmarks for mathematical reasoning.
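As a hypothetical illustration of the instance format the abstract describes (a function, a stated DP guarantee, and specified assumptions), consider the sketch below. The function and its privacy flaw are our own construction, not an item from DPrivBench.

```python
import numpy as np

def noisy_sum(data: list[float], epsilon: float) -> float:
    # Assumption stated by the instance: each record lies in [0, 10],
    # so the clipped sum has L1 sensitivity 10.
    clipped = [min(max(x, 0.0), 10.0) for x in data]
    return sum(clipped) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Claim to verify: noisy_sum satisfies epsilon-DP.
# Verdict: no. The Laplace scale must be sensitivity / epsilon = 10 / epsilon;
# a scale of 1 / epsilon yields only (10 * epsilon)-DP.
```

Whether a model catches this miscalibration, rather than pattern-matching on the mere presence of Laplace noise, is exactly the kind of reasoning the benchmark aims to probe.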