ArXiv TLDR

TADP-RME: A Trust-Adaptive Differential Privacy Framework for Enhancing Reliability of Data-Driven Systems

2604.08113

Labani Halder, Payel Sadhukhan, Sarbani Palit

cs.CR · cs.AI · cs.LG

TLDR

TADP-RME is a trust-adaptive differential privacy framework using reverse manifold embedding to enhance data-driven system reliability against inference attacks.

Key contributions

  • Proposes TADP-RME, a framework for trust-adaptive differential privacy in data-driven systems.
  • Dynamically adjusts privacy budget using an inverse trust score for flexible utility-privacy trade-offs.
  • Uses Reverse Manifold Embedding to nonlinearly transform data, disrupting geometric attacks while preserving DP.
  • Achieves an improved privacy-utility trade-off, reducing inference attack success rates by up to 3.1% without significant utility loss.
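The trust-adaptive budget idea can be sketched in a few lines. This is a minimal illustration, not the paper's actual mechanism: the linear mapping from inverse trust score to epsilon, the `eps_min` floor, and the function names are all assumptions made for the example.

```python
import numpy as np

def adaptive_epsilon(eps_max, inv_trust, eps_min=0.1):
    """Map an inverse trust score in [0, 1] to a privacy budget.

    inv_trust = 0 (fully trusted)  -> eps_max  (least noise)
    inv_trust = 1 (untrusted)      -> eps_min  (most noise)

    Linear interpolation and the eps_min floor are assumptions;
    the paper's modulation rule may differ.
    """
    return eps_max - inv_trust * (eps_max - eps_min)

def laplace_mechanism(value, sensitivity, eps, rng):
    """Standard Laplace mechanism: noise scale = sensitivity / epsilon."""
    return value + rng.laplace(0.0, sensitivity / eps)

rng = np.random.default_rng(0)
eps = adaptive_epsilon(eps_max=2.0, inv_trust=0.75)  # a low-trust user
noisy = laplace_mechanism(42.0, sensitivity=1.0, eps=eps, rng=rng)
```

A low-trust user (inverse trust score near 1) gets a smaller epsilon and hence heavier noise, giving the smooth utility-privacy transition the paper describes.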

Why it matters

Existing differential privacy schemes struggle with varying user trust and are vulnerable to inference attacks due to fixed budgets and preserved geometric structure. TADP-RME offers a unified solution by adaptively modulating privacy and disrupting data geometry, significantly enhancing reliability.

Original Abstract

Ensuring reliability in adversarial settings necessitates treating privacy as a foundational component of data-driven systems. While differential privacy and cryptographic protocols offer strong guarantees, existing schemes rely on a fixed privacy budget, leading to a rigid utility-privacy trade-off that fails under heterogeneous user trust. Moreover, noise-only differential privacy preserves geometric structure, which inference attacks exploit, causing privacy leakage. We propose TADP-RME (Trust-Adaptive Differential Privacy with Reverse Manifold Embedding), a framework that enhances reliability under varying levels of user trust. It introduces an inverse trust score in the range [0,1] to adaptively modulate the privacy budget, enabling smooth transitions between utility and privacy. Additionally, Reverse Manifold Embedding applies a nonlinear transformation to disrupt local geometric relationships while preserving formal differential privacy guarantees through post-processing. Theoretical and empirical results demonstrate improved privacy-utility trade-offs, reducing attack success rates by up to 3.1 percent without significant utility degradation. The framework consistently outperforms existing methods against inference attacks, providing a unified approach for reliable learning in adversarial environments.
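The post-processing argument the abstract relies on can be made concrete: any data-independent function applied to a differentially private output remains differentially private. The sketch below uses a tanh of a random projection as a stand-in for the paper's Reverse Manifold Embedding (whose actual construction is not given here); the transform and its parameters are illustrative assumptions.

```python
import numpy as np

def dp_release(x, eps, rng, sensitivity=1.0):
    # Laplace mechanism on each coordinate (assumes per-coordinate
    # sensitivity; composition accounting is omitted for brevity).
    return x + rng.laplace(0.0, sensitivity / eps, size=x.shape)

def nonlinear_embed(z, rng):
    # Random projection followed by tanh. The projection is drawn
    # independently of the raw data, so applying it to the DP output
    # is pure post-processing and cannot weaken the epsilon guarantee,
    # yet it distorts the local geometry that inference attacks exploit.
    W = rng.standard_normal((z.shape[-1], z.shape[-1]))
    return np.tanh(z @ W)

rng = np.random.default_rng(1)
x = np.array([0.2, -1.3, 0.7])
released = nonlinear_embed(dp_release(x, eps=1.0, rng=rng), rng=rng)
```

The key design point is ordering: noise is added first (where the formal guarantee is established), and the geometry-disrupting transform is applied afterward, so the DP guarantee carries through unchanged.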
