ArXiv TLDR

CoreFlow: Low-Rank Matrix Generative Models

arXiv: 2604.24959

Dongze Wu, Linglingzhi Zhu, Yao Xie

cs.LG, stat.ML

TLDR

CoreFlow is a low-rank flow model that efficiently learns matrix distributions, especially from high-dimensional, limited, or incomplete data.

Key contributions

  • CoreFlow learns shared low-rank subspaces and trains a flow on the low-dimensional core for matrix generation.
  • Designed for high-dimensional, limited-sample settings, separating shared geometry from sample variation.
  • Handles incomplete training data using masked Riemannian updates and iterative completion.
  • Significantly improves generation quality in few-sample regimes, even with high compression or missing data.
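To make the core idea concrete, here is a minimal, hypothetical sketch of the compression step the contributions describe: estimate shared row/column subspaces across the sample, reduce each matrix to a small core, and (in the full method) train the flow on those cores. The subspace-estimation recipe below (eigenvectors of averaged second-moment matrices) is an illustrative assumption, not necessarily CoreFlow's exact procedure.

```python
# Hypothetical sketch: shared subspaces U (m x r) and V (n x r) are estimated,
# each matrix X_i is compressed to an r x r core C_i = U^T X_i V, and a
# generative flow would then be trained on the cores instead of on X_i.
import numpy as np

rng = np.random.default_rng(0)
m, n, r, N = 50, 40, 5, 30  # ambient dims, core rank, sample count (illustrative)

# Synthetic low-rank samples sharing row/column subspaces
U_true, _ = np.linalg.qr(rng.standard_normal((m, r)))
V_true, _ = np.linalg.qr(rng.standard_normal((n, r)))
X = np.stack([U_true @ rng.standard_normal((r, r)) @ V_true.T for _ in range(N)])

# Estimate shared subspaces from averaged second-moment matrices (an assumption)
row_cov = sum(Xi @ Xi.T for Xi in X) / N     # (m x m)
col_cov = sum(Xi.T @ Xi for Xi in X) / N     # (n x n)
U = np.linalg.eigh(row_cov)[1][:, -r:]       # top-r row subspace
V = np.linalg.eigh(col_cov)[1][:, -r:]       # top-r column subspace

# Compress each matrix to its r x r core; the flow would be trained on these
cores = np.einsum('mr,imn,ns->irs', U, X, V)

# Lifting the cores back reconstructs the samples almost exactly
X_hat = np.einsum('mr,irs,ns->imn', U, cores, V)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(err)  # near zero for exactly low-rank data
```

Note the payoff claimed in the paper: the flow sees only r x r cores (here 25 numbers per sample instead of 2000), which is what makes training tractable in few-sample, high-dimensional regimes.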

Why it matters

Learning matrix distributions from high-dimensional, limited, or incomplete data is difficult. CoreFlow efficiently models low-rank geometry, significantly improving generative quality in data-scarce and complex settings.
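The incomplete-data case can be illustrated with a simple alternating loop: keep observed entries fixed and repeatedly refill the missing ones from a low-rank approximation. This is a generic hard-impute-style sketch, not CoreFlow's actual masked Riemannian updates, and the rank, mask rate, and iteration count below are arbitrary choices.

```python
# Hypothetical sketch of iterative completion: observed entries stay fixed
# while missing entries are repeatedly refilled from a rank-r projection.
# CoreFlow's masked Riemannian updates are more sophisticated; this only
# illustrates the alternating project-and-refill idea.
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 60, 50, 4
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # ground truth
mask = rng.random((m, n)) > 0.4          # ~60% observed, ~40% missing

Z = np.where(mask, X, 0.0)               # initialize missing entries at zero
for _ in range(200):
    # Project the current estimate onto rank-r matrices via truncated SVD
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    low_rank = U[:, :r] * s[:r] @ Vt[:r]
    # Keep observed entries, refill only the missing ones
    Z = np.where(mask, X, low_rank)

rel_err = np.linalg.norm(Z - X) / np.linalg.norm(X)
print(rel_err)  # small when the low-rank matrix is recoverable from the mask
```

The 40% missing-entry figure matches the regime the paper reports results for; with exactly low-rank data and enough observed entries, this kind of loop recovers the matrix to high accuracy.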

Original Abstract

Learning matrix-valued distributions from high-dimensional and possibly incomplete training data is challenging: ambient-space generative modeling is computationally expensive and statistically fragile when the matrix dimension is large but the sample size is limited. We propose CoreFlow, a geometry-preserving low-rank flow model that learns shared row/column subspaces across the matrix distribution, and then trains a continuous normalizing flow only on the induced low-dimensional core. CoreFlow is designed for settings where shared low-rank matrix geometry is present, especially in high-dimensional limited-sample regimes. This separates shared matrix geometry from sample-specific variation, preserves matrix structure, and substantially improves training efficiency. The same framework also handles incomplete training matrices through masked Riemannian updates and iterative completion. Across real and synthetic benchmarks, CoreFlow substantially improves spectral and moment-level generation quality in few-sample regimes while remaining competitive in data-rich settings, even under compression to 9% of the ambient dimension and with up to 40% missing training entries.
