The recently proposed tensor correlated total variation (t-CTV) has achieved notable success in tensor completion. It exploits the low-rank structure of the gradient tensor under a unified linear transform to jointly encode low-rankness and smoothness priors. However, fixed linear transforms are inherently limited in fully characterizing gradient tensors along different directions and in adapting to tensors from diverse categories. In this work, we propose a nonlinear tensor correlated total variation (NTCTV) regularization term that leverages the low-rank correlations of the gradient tensor under a learnable nonlinear transform, providing a more natural way to fuse the low-rankness and smoothness priors. Specifically, our approach learns the optimal nonlinear implicit low-rank structure of the gradient tensor along each mode separately, and then fuses the resulting prior information in a coupled manner. Furthermore, we formulate an NTCTV-based tensor completion model and design a proximal alternating minimization (PAM) algorithm to solve it efficiently. Moreover, we prove that the algorithm converges globally to a critical point. Comprehensive experiments on hyperspectral images, medical images, multispectral images, and videos demonstrate that the proposed method achieves substantial quantitative and qualitative improvements over many state-of-the-art tensor completion techniques.
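To make the core idea concrete, the sketch below illustrates a CTV-style regularizer in the simplest possible form: take finite-difference (gradient) tensors along each mode and measure their low-rankness. This is only a matrix-unfolding surrogate under plain finite differences; the actual t-CTV uses a tensor nuclear norm under a unified linear transform, and the proposed NTCTV replaces that transform with a learnable nonlinear one, neither of which is reproduced here. The function names are illustrative, not from the paper.

```python
import numpy as np

def mode_unfold(t, mode):
    """Unfold a tensor into a matrix along the given mode."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def ctv_style_regularizer(x):
    """Illustrative surrogate for a CTV-style penalty: sum, over modes,
    of the nuclear norm of the mode-k gradient tensor's unfolding.
    Small values indicate the tensor is smooth with low-rank gradients."""
    total = 0.0
    for k in range(x.ndim):
        grad_k = np.diff(x, axis=k)  # finite-difference (gradient) tensor along mode k
        total += np.linalg.norm(mode_unfold(grad_k, k), ord='nuc')
    return total

# A smooth tensor (whose gradients are constant, hence rank-1 when unfolded)
# should score far lower than an i.i.d. random tensor of the same size.
rng = np.random.default_rng(0)
ramp = np.linspace(0.0, 1.0, 8)
smooth = np.add.outer(np.add.outer(ramp, ramp), ramp)  # x[i,j,k] = a_i + a_j + a_k
noisy = rng.standard_normal((8, 8, 8))
print(ctv_style_regularizer(smooth) < ctv_style_regularizer(noisy))  # True
```

The surrogate captures why coupling the two priors in one term is attractive: a single penalty on the gradient tensor simultaneously rewards smoothness (small gradients) and low-rankness (spectrally concentrated gradients), rather than balancing two separate regularizers.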