Differentially Private Contrastive Learning via Bounding Group-level Contribution
Kecen Li, Chen Gong, Zinan Lin, Tianhao Wang, Xiaokui Xiao
TLDR
DP-GCL improves differentially private contrastive learning by bounding group-level contributions, significantly enhancing utility while preserving privacy.
Key contributions
- Proposes DP-GCL, a framework for differentially private contrastive learning that reduces inter-sample dependency.
- Limits gradient influence by partitioning batches into small groups and restricting negative samples within groups.
- Introduces intra-group augmentation to maintain negative sample diversity without increasing privacy cost.
- Achieves state-of-the-art results, improving image classification accuracy by 5.6% and image-text retrieval accuracy by 20.1% over prior DP contrastive methods.
Why it matters
Existing DP contrastive learning methods suffer severe utility loss because each sample's gradient depends on every other sample in the batch, which amplifies the effect of DP noise. By localizing gradient influence to small groups, DP-GCL offers a principled way to learn robust, privacy-preserving representations, making it more practical to train on sensitive data.
Original Abstract
Differentially private (DP) contrastive learning aims to learn general-purpose representations from sensitive data, alleviating the privacy leakage concerns of organizations deploying or sharing embedding models trained on private user content. However, existing approaches suffer from severe utility degradation due to the over-strong inter-sample dependency inherent in standard contrastive objectives, where each sample's gradient depends on all other samples in the batch, amplifying the impact of DP noise. In this work, we argue that effective DP contrastive learning requires explicitly reducing such intrinsic inter-sample reliance. To this end, we propose DP-GCL, a principled DP contrastive learning framework that structurally limits gradient dependency through bounding group-level contribution. DP-GCL partitions each batch into small, disjoint groups and restricts available negative samples to within-group samples, thereby localizing gradient influence and reducing sensitivity. To counteract the resulting loss of negative sample diversity, we further introduce intra-group augmentation, which generates additional negative views without increasing privacy cost. Extensive experiments across eight datasets demonstrate that DP-GCL consistently advances the state of the art in both uni-modal and multi-modal contrastive learning under practical privacy budgets: it improves image classification accuracy by 5.6% and image-text retrieval accuracy by 20.1% over existing DP contrastive methods.
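To make the core mechanism concrete, below is a minimal NumPy sketch of a group-restricted InfoNCE loss in the spirit of the abstract: the batch is split into small disjoint groups, and each sample's negatives come only from its own group, so its gradient depends on at most `group_size` other samples. The function name and interface are hypothetical, and the sketch omits the paper's DP machinery (gradient clipping, noise addition) and the intra-group augmentation step.

```python
import numpy as np

def grouped_info_nce(z1, z2, group_size, temperature=0.1):
    """Hypothetical sketch: InfoNCE with negatives restricted to within-group samples.

    z1, z2: (N, d) L2-normalized embeddings of two views of each sample,
    where row i of z1 and row i of z2 are views of the same sample.
    Consecutive rows are treated as one group of `group_size` samples,
    so each sample's loss (and gradient) touches only its own group.
    """
    n = z1.shape[0]
    assert n % group_size == 0, "batch must split into equal-size groups"
    losses = []
    for start in range(0, n, group_size):
        a = z1[start:start + group_size]              # anchor views
        b = z2[start:start + group_size]              # positives + in-group negatives
        logits = a @ b.T / temperature                # within-group similarities only
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        losses.append(-np.diag(log_probs).mean())     # positives sit on the diagonal
    return float(np.mean(losses))
```

With perfectly aligned, mutually orthogonal views the loss is near zero, while random embeddings give a loss near `log(group_size)`; in a full DP pipeline, per-group clipping and noise would be applied on top of this bounded-dependency objective.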