ArXiv TLDR

Direct Discrepancy Replay: Distribution-Discrepancy Condensation and Manifold-Consistent Replay for Continual Face Forgery Detection

arXiv: 2604.12941

Tianshuo Zhang, Haoyuan Zhang, Siran Peng, Weisong Zhao, Xiangyu Zhu + 1 more

cs.CV

TLDR

This paper introduces Direct Discrepancy Replay (DDR) for continual face forgery detection, using distribution discrepancy condensation and manifold-consistent replay to prevent forgetting.

Key contributions

  • Introduces Distribution-Discrepancy Condensation (DDC) to model and condense real-to-fake discrepancies.
  • Proposes Manifold-Consistent Replay (MCR) to synthesize replay samples from DDC maps and current real faces.
  • Outperforms prior CFFD baselines under an extremely small memory budget, without storing raw historical face images.
  • Reduces identity leakage risk compared to selection-based replay methods.

Why it matters

Existing continual face forgery detection methods either strain memory budgets, risk exposing facial identities, or remain tied to past decision boundaries. This work offers a memory-efficient, privacy-preserving alternative: replaying distribution discrepancies instead of raw data. It advances CFFD by mitigating catastrophic forgetting while reducing identity-leakage risk.

Original Abstract

Continual face forgery detection (CFFD) requires detectors to learn emerging forgery paradigms without forgetting previously seen manipulations. Existing CFFD methods commonly rely on replaying a small amount of past data to mitigate forgetting. Such replay is typically implemented either by storing a few historical samples or by synthesizing pseudo-forgeries from detector-dependent perturbations. Under strict memory budgets, the former cannot adequately cover diverse forgery cues and may expose facial identities, while the latter remains strongly tied to past decision boundaries. We argue that the core role of replay in CFFD is to reinstate the distributions of previous forgery tasks during subsequent training. To this end, we directly condense the discrepancy between real and fake distributions and leverage real faces from the current stage to perform distribution-level replay. Specifically, we introduce Distribution-Discrepancy Condensation (DDC), which models the real-to-fake discrepancy via a surrogate factorization in characteristic-function space and condenses it into a tiny bank of distribution discrepancy maps. We further propose Manifold-Consistent Replay (MCR), which synthesizes replay samples through variance-preserving composition of these maps with current-stage real faces, yielding samples that reflect previous-task forgery cues while remaining compatible with current real-face statistics. Operating under an extremely small memory budget and without directly storing raw historical face images, our framework consistently outperforms prior CFFD baselines and significantly mitigates catastrophic forgetting. Replay-level privacy analysis further suggests reduced identity leakage risk relative to selection-based replay.
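The abstract's two mechanisms can be illustrated with a minimal NumPy sketch. Everything below is an assumption-laden toy, not the paper's implementation: the Gaussian "features", the frequency probes `T`, the CF-difference "map", and the `alpha` mixing weight are all hypothetical stand-ins. It shows (a) an empirical characteristic function, one plausible reading of the "characteristic-function space" in which DDC condenses the real-to-fake discrepancy, and (b) a sqrt-weighted composition of a discrepancy signal with current real samples, which preserves total variance for independent zero-mean components, one plausible reading of MCR's "variance-preserving composition".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for real- and fake-face feature vectors (hypothetical data;
# the paper operates on face images, not these Gaussians).
real = rng.normal(0.0, 1.0, size=(2048, 16))
fake = rng.normal(0.5, 1.2, size=(2048, 16))

# Small grid of frequency probes t for the empirical characteristic
# function phi(t) = E[exp(i <t, x>)].
T = rng.normal(size=(32, 16))

def ecf(X):
    """Empirical characteristic function of the rows of X at probes T."""
    return np.exp(1j * X @ T.T).mean(axis=0)  # shape: (num_probes,)

# "Condense" the real-to-fake discrepancy into a tiny map: here just the
# difference of characteristic functions (32 complex numbers), a crude
# stand-in for DDC's surrogate factorization. No raw fake images are kept.
ddc_map = ecf(fake) - ecf(real)

# Replay sketch: compose a discrepancy signal with current-stage real faces
# using sqrt weights, so for independent zero-mean unit-variance components
# the mixed sample keeps total variance (1 - alpha) + alpha = 1.
alpha = 0.15
current_real = rng.normal(0.0, 1.0, size=(2048, 16))
# Hypothetical per-sample discrepancy signal "decoded" from the map; a
# matched-variance noise field serves as a placeholder here.
discrepancy_signal = rng.normal(0.0, 1.0, size=(2048, 16))
replay = np.sqrt(1 - alpha) * current_real + np.sqrt(alpha) * discrepancy_signal

print(ddc_map.shape, replay.var())
```

The sqrt weighting is the key property being illustrated: a plain convex mix `(1-alpha)*x + alpha*d` would shrink the variance of the composed sample, whereas squared weights summing to one keep the replay samples statistically compatible with current-stage real faces.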
