ArXiv TLDR

CPCANet: Deep Unfolding Common Principal Component Analysis for Domain Generalization

arXiv: 2605.05136

Yu-Hsi Chen, Abd-Krim Seghouane

cs.CV

TLDR

CPCANet is a deep unfolding Common Principal Component Analysis (CPCA) framework for domain generalization that achieves state-of-the-art (SOTA) zero-shot transfer on four standard benchmarks.

Key contributions

  • Proposes CPCANet, a framework for domain generalization built on deep unfolding of Common Principal Component Analysis (CPCA).
  • Unrolls the iterative Flury-Gautschi (FG) algorithm into differentiable neural layers for end-to-end training (see the sketch after this list).
  • Enforces discovery of a shared, interpretable domain-invariant subspace across diverse domains.
  • Achieves state-of-the-art zero-shot transfer performance on four standard DG benchmarks.
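
For context, the Flury-Gautschi (FG) algorithm that CPCANet unrolls is a classical pairwise-rotation scheme for jointly diagonalizing several covariance matrices with a single orthogonal basis. Below is a minimal NumPy sketch of the plain, non-unrolled iteration as I read Flury and Gautschi's F/G scheme; the function name fg_cpca, the sweep and inner-iteration counts, and the tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def fg_cpca(covs, weights=None, sweeps=20, inner=10, tol=1e-10):
    """Classical Flury-Gautschi CPCA: find one orthogonal B such that
    B.T @ S_i @ B is (approximately) diagonal for every covariance S_i."""
    covs = np.asarray(covs, dtype=float)               # (k, p, p)
    k, p, _ = covs.shape
    w = np.ones(k) if weights is None else np.asarray(weights, dtype=float)
    B = np.eye(p)
    for _ in range(sweeps):                            # outer F-iterations
        for l in range(p - 1):
            for j in range(l + 1, p):
                P = B[:, [l, j]]                       # current column pair
                H = np.stack([P.T @ S @ P for S in covs])  # 2x2 compressions
                Q = np.eye(2)
                for _ in range(inner):                 # inner G-iterations
                    d1 = np.einsum('a,kab,b->k', Q[:, 0], H, Q[:, 0])
                    d2 = np.einsum('a,kab,b->k', Q[:, 1], H, Q[:, 1])
                    T = np.einsum('k,kab->ab', w * (d1 - d2) / (d1 * d2), H)
                    _, V = np.linalg.eigh(T)
                    # keep the eigenvector closest to the current q1 first
                    if abs(V[:, 0] @ Q[:, 0]) < abs(V[:, 1] @ Q[:, 0]):
                        V = V[:, ::-1]
                    V = V * np.where((V * Q).sum(axis=0) < 0, -1.0, 1.0)
                    if np.linalg.norm(V - Q) < tol:    # rotation converged
                        Q = V
                        break
                    Q = V
                B[:, [l, j]] = P @ Q                   # apply 2x2 rotation
    return B
```

On covariances that share an exact common eigenbasis, B recovers that basis up to column order and sign; CPCANet's contribution is to turn a fixed number of such iterations into trainable layers.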

Why it matters

This paper addresses the critical challenge of domain generalization with a statistically grounded approach. CPCANet provides an interpretable, architecture-agnostic method for learning robust representations, setting a new SOTA in zero-shot transfer. Its efficiency and freedom from dataset-specific tuning make it highly practical.

Original Abstract

Domain Generalization (DG) aims to learn representations that remain robust under out-of-distribution (OOD) shifts and generalize effectively to unseen target domains. While recent invariant learning strategies and architectural advances have achieved strong performance, explicitly discovering a structured domain-invariant subspace through second-order statistics remains underexplored. In this work, we propose CPCANet, a novel framework grounded in Common Principal Component Analysis (CPCA), which unrolls the iterative Flury-Gautschi (FG) algorithm into fully differentiable neural layers. This approach integrates the statistical properties of CPCA into an end-to-end trainable framework, enforcing the discovery of a shared subspace across diverse domains while preserving interpretability. Experiments on four standard DG benchmarks demonstrate that CPCANet achieves state-of-the-art (SOTA) performance in zero-shot transfer. Moreover, CPCANet is architecture-agnostic and requires no dataset-specific tuning, providing a simple and efficient approach to learning robust representations under distribution shift. Code is available at https://github.com/wish44165/CPCANet.
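
To make the deep-unfolding idea concrete, here is a rough PyTorch sketch of unrolled joint-diagonalization layers with learnable step sizes. The paper's actual layer design is not reproduced here: since FG's inner eigendecompositions are awkward to backpropagate through, this sketch substitutes a Brockett-style commutator update kept on the orthogonal group via torch.matrix_exp. The class names UnfoldedCPCALayer and CPCAHead, the number of layers, and the step-size initialization are all hypothetical.

```python
import torch
import torch.nn as nn

class UnfoldedCPCALayer(nn.Module):
    """One unrolled joint-diagonalization step (hypothetical FG surrogate):
    a commutator ascent on sum_i ||diag(B^T S_i B)||^2 with a learnable
    step size, kept orthogonal via the matrix exponential."""

    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))   # learnable step size

    def forward(self, B, covs):
        # covs: (k, p, p) per-domain covariances; B: (p, p) orthogonal basis
        M = B.transpose(-1, -2) @ covs @ B            # rotated covariances
        D = torch.diag_embed(torch.diagonal(M, dim1=-2, dim2=-1))
        G = (M @ D - D @ M).sum(dim=0)                # skew-symmetric generator
        return B @ torch.matrix_exp(self.step * G)    # stays orthogonal

class CPCAHead(nn.Module):
    """Unrolls n_layers steps to extract a shared basis across domains."""

    def __init__(self, dim, n_layers=5):
        super().__init__()
        self.layers = nn.ModuleList([UnfoldedCPCALayer() for _ in range(n_layers)])
        self.register_buffer('B0', torch.eye(dim))

    def forward(self, feats_per_domain):
        # feats_per_domain: list of (n_i, p) centered feature batches
        covs = torch.stack([f.T @ f / f.shape[0] for f in feats_per_domain])
        B = self.B0
        for layer in self.layers:
            B = layer(B, covs)
        # project every domain's features onto the shared basis
        return [f @ B for f in feats_per_domain], B
```

In a full DG pipeline, backbone features would pass through a head like this before the classifier, so the task loss shapes the shared basis end to end; this is one plausible reading of "unrolling FG into differentiable layers," not the paper's architecture.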
