Benchmarking the Utility of Privacy-Preserving Cox Regression Under Data-Driven Clipping Bounds: A Multi-Dataset Simulation Study
Keita Fukuyama, Yukiko Mori, Tomohiro Kuroda, Hiroaki Kikuchi
TLDR
This study benchmarks the impact of differential privacy on Cox regression utility, finding substantial performance degradation at standard privacy levels ($\varepsilon \leq 1$).
Key contributions
- Systematically evaluated DP on Cox regression utility across 5 datasets, 15 $\varepsilon$ levels, and 1000 iterations.
- At standard DP ($\varepsilon \leq 1$), ~90% of significant covariates lost significance, with C-index approaching 0.5.
- Input perturbation (covariates only) preserved the risk-set structure and achieved the best recovery among input methods (see the sketch after this list).
- Output perturbation (dfbeta-based sensitivity) maintained near-baseline performance at $\varepsilon \geq 5$.
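The following is a minimal, hedged sketch of what the "covariates only" input perturbation could look like: each covariate is clipped to its observed min/max (the data-driven bounds, which is why no formal $\varepsilon$-DP guarantee holds) and Laplace noise is added, while the time and event columns are left untouched so the risk-set structure is preserved. The column names `time`/`event`, the choice to spend the full $\varepsilon$ on each covariate, and the helper name are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: Laplace input perturbation of covariates only, with observed
# (data-driven) clipping bounds. Not a formal epsilon-DP mechanism.
import numpy as np
import pandas as pd


def perturb_covariates(df: pd.DataFrame, epsilon: float, rng=None) -> pd.DataFrame:
    """Add Laplace noise to every column except the survival outcome.

    Assumes the outcome is stored in columns named "time" and "event";
    for simplicity the full epsilon is spent on each covariate (a real
    analysis would split the privacy budget across covariates).
    """
    if rng is None:
        rng = np.random.default_rng()
    out = df.copy()
    covariate_cols = [c for c in df.columns if c not in ("time", "event")]
    for col in covariate_cols:
        lo, hi = df[col].min(), df[col].max()   # observed (data-driven) bounds
        sensitivity = hi - lo                   # range used as the Laplace numerator
        clipped = df[col].clip(lo, hi)          # no-op with observed bounds; shown for clarity
        out[col] = clipped + rng.laplace(scale=sensitivity / epsilon, size=len(df))
    return out


# Example usage: perturbed = perturb_covariates(df, epsilon=1.0)
```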
Why it matters
Differential privacy is vital for data sharing, but its utility impact on survival analysis, particularly Cox regression, is underexplored. This study provides a comprehensive benchmark, revealing substantial utility loss at standard DP levels. It offers insights into practical $\varepsilon$ values and perturbation strategies to balance privacy and utility.
Original Abstract
Differential privacy (DP) is a mathematical framework that guarantees individual privacy; however, systematic evaluation of its impact on statistical utility in survival analyses remains limited. In this study, we systematically evaluated the impact of DP mechanisms (Laplace mechanism and Randomized Response) with data-driven clipping bounds on the Cox proportional hazards model, using 5 clinical datasets ($n = 168$--$6{,}524$), 15 levels of $\varepsilon$ (0.1--1000), and $B = 1{,}000$ Monte Carlo iterations. The data-driven clipping bounds used here are observed min/max and therefore do not provide formal $\varepsilon$-DP guarantees; the results represent an optimistic lower bound on utility degradation under formal DP. We compared three types of input perturbations (covariates only, all inputs, and the discrete-time model) with output perturbations (dfbeta-based sensitivity), using loss of significance rate (LSR), C-index, and coefficient bias as metrics. At standard DP levels ($\varepsilon \leq 1$), approximately 90% (90--94%) of the significant covariates lost significance, even in the largest dataset ($n = 6{,}524$), and the predictive performance approached random levels (test C-index $\approx 0.5$) under many conditions. Among the input perturbation approaches, perturbing only covariates preserved the risk-set structure and achieved the best recovery, whereas output perturbation (dfbeta-based sensitivity) maintained near-baseline performance at $\varepsilon \geq 5$. At $n \approx 3{,}000$, the significance recovered rapidly at $\varepsilon = 3$--10; however, in practice, $\varepsilon \geq 10$ (for predictive performance) to $\varepsilon \geq 30$--60 (for significance preservation) is required. In the moderate-to-high $\varepsilon$ range, false-positive rates increased for variables whose baseline $p$-values were near the significance threshold.
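To make the output-perturbation arm concrete, here is a small sketch of dfbeta-style output perturbation, assuming the `lifelines` package (`CoxPHFitter`) and `time`/`event` column names. The per-record influence is approximated here by leave-one-out refits rather than the analytical dfbeta residuals the paper may use, and the per-coefficient noise scaling is a simplification; like the paper's data-driven bounds, this empirical sensitivity does not yield a formal $\varepsilon$-DP guarantee.

```python
# Sketch: output perturbation of Cox coefficients using an empirical,
# dfbeta-style sensitivity (max leave-one-out coefficient change).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter


def fit_cox(df: pd.DataFrame) -> np.ndarray:
    """Fit a Cox proportional hazards model and return its coefficient vector."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    return cph.params_.to_numpy()


def dp_output_perturbation(df: pd.DataFrame, epsilon: float, rng=None) -> np.ndarray:
    """Release Cox coefficients with Laplace noise scaled by an empirical sensitivity.

    The sensitivity is the maximum per-coefficient change when any single
    record is removed (a leave-one-out proxy for dfbeta). Because it is
    estimated from the data, the release is only heuristically private.
    """
    if rng is None:
        rng = np.random.default_rng()
    beta_full = fit_cox(df)
    # Leave-one-out refits approximate each record's influence on the coefficients.
    dfbetas = np.array([beta_full - fit_cox(df.drop(index=i)) for i in df.index])
    sensitivity = np.abs(dfbetas).max(axis=0)  # per-coefficient bound (a simplification;
                                               # a formal analysis would bound all coefficients jointly)
    noise = rng.laplace(scale=sensitivity / epsilon, size=beta_full.shape)
    return beta_full + noise
```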