EASE: Federated Multimodal Unlearning via Entanglement-Aware Anchor Closure
Zihao Ding, Beining Wu, Jun Huang
TLDR
EASE is a framework for federated multimodal unlearning that removes forgotten data by severing both cross-modal and client-subspace entanglement.
Key contributions
- Identifies three "residual anchors" through which forgotten alignments persist in federated multimodal unlearning, and closes each of them.
- Closes cross-modal reconstruction channels by bilaterally displacing visual and language branches.
- Isolates forget-exclusive update directions using Cosine-Sine decomposition of client subspaces.
- Introduces a direction-selective Forget Lock to bound residual drift across unlearning rounds.
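The subspace-isolation step above can be illustrated with principal angles, the subspace view underlying a Cosine-Sine decomposition. This is a hedged sketch, not the paper's implementation: the function name, inputs `U_f`/`U_r` (column spans of forget-client and retain-client update subspaces), and the cosine threshold are all assumptions for illustration.

```python
import numpy as np

def forget_exclusive_directions(U_f, U_r, cos_thresh=0.5):
    """Split the forget-client update subspace into directions shared with
    the retain subspace vs. forget-exclusive ones, via principal angles.

    U_f, U_r: (d, k) matrices whose columns span the forget / retain
    update subspaces (hypothetical inputs; column counts may differ).
    """
    # Orthonormalize each subspace basis.
    Qf, _ = np.linalg.qr(U_f)
    Qr, _ = np.linalg.qr(U_r)
    # The SVD of Qf^T Qr yields the cosines of the principal angles
    # between the two subspaces (the "cosine" half of a CS decomposition).
    Y, s, _ = np.linalg.svd(Qf.T @ Qr)
    # Angles beyond min(k_f, k_r) are 90 degrees: cosine 0, fully exclusive.
    cosines = np.concatenate([s, np.zeros(Qf.shape[1] - s.size)])
    # Rotate Qf into principal vectors, one per principal angle.
    P = Qf @ Y
    shared = P[:, cosines >= cos_thresh]     # small angle: retain support
    exclusive = P[:, cosines < cos_thresh]   # large angle: forget-exclusive
    return shared, exclusive
```

Directions with cosine near 1 lie in the retain support and should be preserved; directions with cosine near 0 are candidates for excision.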
Why it matters
Federated Multimodal Learning struggles to unlearn data effectively because forgotten knowledge is entangled across modalities and across client gradient subspaces. EASE offers a unified, systematic way to sever these connections, making data removal in decentralized multimodal models more reliable. This matters for building privacy-compliant systems that must honor deletion requests.
Original Abstract
Federated Multimodal Learning (FML) trains multimodal models across decentralized clients while keeping their image-text pairs private. However, joint embedding training entangles forgotten knowledge across both modalities and client gradient subspaces, hindering federated unlearning. Previous federated unlearning approaches neither sever the cross-modal reconstruction channel mediated by bilinear coupling nor separate forget-exclusive update directions from those shared with retained clients. We identify an Anchor Principle for federated multimodal contrastive unlearning: forgotten alignments persist through three residual anchors arising from bilinear cross-modal coupling, principal-angle subspace entanglement, and continued federated updates. At the modality level, we show that bilateral displacement of both visual and language branches closes the cross-modal reconstruction channel. Correspondingly, our method addresses subspace entanglement through Cosine-Sine decomposition of client-update subspaces, isolating forget-exclusive directions from retain support. Moreover, we propose a direction-selective Forget Lock that bounds residual drift across rounds. Combining these strategies, we present EASE, an Entanglement-Aware Subspace Excision framework that closes all three anchor channels under a unified design. EASE demonstrates consistent superiority across multiple datasets and unlearning scenarios, for instance, matching the retrain reference to within 0.2 and 4.2 R@1 points on the forget and retain sides under client unlearning on Flickr30K with CLIP-B/32.