Robust Low-Rank Tensor Completion based on M-product with Weighted Correlated Total Variation and Sparse Regularization
Biswarup Karmakar, Ratikanta Behera
TLDR
This paper introduces a robust low-rank tensor completion method based on an M-product tensor weighted correlated total variation (TWCTV) regularizer, improving the recovery of high-dimensional data corrupted by missing entries, outliers, and sparse noise.
Key contributions
- Introduces a novel Tensor Weighted Correlated Total Variation (TWCTV) regularizer for robust tensor completion.
- Utilizes an M-product framework combining weighted Schatten-$p$ norm and sparse regularization.
- Adaptive weighting reduces the thresholding applied to dominant singular values and sparse components, preserving critical tensor structures and fine details.
- Develops an enhanced ADMM algorithm that is computationally efficient, with convergence analyzed within the M-product framework.
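The adaptive weighting idea can be illustrated with a weighted singular value thresholding step, the core proximal update behind weighted nuclear/Schatten-norm minimization. This is a minimal sketch: the reciprocal weighting `w_i = 1/(sigma_i + eps)` is a common reweighting heuristic and an assumption here, not necessarily the paper's exact scheme.

```python
import numpy as np

def weighted_svt(X, tau, eps=1e-6):
    """Weighted singular value thresholding (illustrative sketch).

    Larger singular values get smaller weights, so dominant structure
    is shrunk less than with uniform nuclear-norm thresholding.
    The weight w_i = 1/(sigma_i + eps) is a hypothetical choice.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = 1.0 / (s + eps)                       # adaptive per-value weights
    s_shrunk = np.maximum(s - tau * w, 0.0)   # soft-threshold each value
    return U @ np.diag(s_shrunk) @ Vt

# With sigma = (10, 1, 0.1) and tau = 0.5, the dominant value is barely
# shrunk while the smallest is zeroed out entirely.
X = np.diag([10.0, 1.0, 0.1])
Y = weighted_svt(X, tau=0.5)
```

Under uniform thresholding all three singular values would lose the same amount (0.5); the weighted rule concentrates shrinkage on the small, noise-dominated values.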
Why it matters
Existing tensor completion methods rely on uniform regularization that indiscriminately shrinks all singular values and sparse components, destroying critical data structures. The TWCTV method adaptively preserves dominant singular values and sparse components, and delivers superior performance in image completion, denoising, and background subtraction.
Original Abstract
The robust low-rank tensor completion problem addresses the challenge of recovering corrupted high-dimensional tensor data with missing entries, outliers, and sparse noise commonly found in real-world applications. Existing methodologies have encountered fundamental limitations due to their reliance on uniform regularization schemes, particularly the tensor nuclear norm and $\ell_1$ norm regularization approaches, which indiscriminately apply equal shrinkage to all singular values and sparse components, thereby compromising the preservation of critical tensor structures. The proposed tensor weighted correlated total variation (TWCTV) regularizer addresses these shortcomings through an $M$-product framework that combines a weighted Schatten-$p$ norm on gradient tensors for low-rankness with smoothness enforcement and weighted sparse components for noise suppression. The proposed weighting scheme adaptively reduces the thresholding level to preserve both dominant singular values and sparse components, thus improving the reconstruction of critical structural elements and nuanced details in the recovered signal. Through a systematic algorithmic approach, we introduce an enhanced alternating direction method of multipliers (ADMM) that offers both computational efficiency and theoretical substantiation, with convergence properties comprehensively analyzed within the $M$-product framework. Comprehensive numerical evaluations across image completion, denoising, and background subtraction tasks validate the superior performance of this approach relative to established benchmark methods.
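The weighted sparse regularization mentioned in the abstract has a similarly simple proximal form: a weighted elementwise soft-threshold, where large-magnitude entries (likely genuine outlier signal) are shrunk less. The reweighting rule `w = 1/(|s| + eps)` below is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def weighted_soft_threshold(S, lam, eps=1e-6):
    """Weighted l1 shrinkage for a sparse component (illustrative sketch).

    Entries with larger magnitude receive smaller weights and are
    thresholded less, so salient sparse structure survives while
    small noise entries are suppressed. The weight choice
    w = 1/(|s| + eps) is a hypothetical reweighting heuristic.
    """
    W = 1.0 / (np.abs(S) + eps)
    return np.sign(S) * np.maximum(np.abs(S) - lam * W, 0.0)

# Large entries are nearly preserved, tiny entries are zeroed.
s = np.array([5.0, 0.5, -0.01])
s_hat = weighted_soft_threshold(s, lam=0.2)
```

Inside an ADMM loop, updates of this form would alternate with the low-rank (weighted SVT) update and a dual-variable step, which is the standard splitting structure the abstract's enhanced ADMM builds on.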