The tensor train rank (TT-rank) has yielded promising results in tensor completion owing to its ability to capture the global low-rankness of higher-order (order > 3) tensors. Meanwhile, quaternions have recently proven remarkably well suited to encoding color pixels, delivering strong performance across a range of color image processing tasks. In this paper, we encode the three channels of a color pixel in the three imaginary parts of a quaternion, leveraging the structural advantages of quaternions to fully preserve the relationships among the color channels. We then extend the TT-rank to higher-order quaternion tensors to capture the global low-rank structure of higher-dimensional data. Specifically, the quaternion tensor train (QTT) decomposition is presented, from which the quaternion TT-rank (QTT-rank) is naturally defined. In addition, to exploit the local sparse prior of the quaternion tensor, a general and flexible transform framework is defined. Combining the global low-rank and local sparse priors of the quaternion tensor, we propose a novel quaternion tensor completion model, namely QTT-rank minimization with sparse regularization in a transformed domain. Furthermore, to enable QTT-rank minimization to process color images and to enhance its performance on color videos, we extend KA, a tensor augmentation method, to quaternion tensors, yielding quaternion KA (QKA). Numerical experiments on color image and color video inpainting tasks demonstrate the superiority of the proposed method over state-of-the-art alternatives.
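For concreteness, the quaternion encoding mentioned above can be written in the standard convention of quaternion color image processing (the symbols below are illustrative and may differ from the paper's own notation): the red, green, and blue values of the pixel at position $(m,n)$ are placed in the three imaginary parts of a pure quaternion,
\[
  \dot{q}_{mn} = 0 + R_{mn}\,i + G_{mn}\,j + B_{mn}\,k \in \mathbb{H},
  \qquad 1 \le m \le M,\; 1 \le n \le N,
\]
so that an $M \times N$ color image is represented as a single pure quaternion matrix $\dot{\mathbf{Q}} = (\dot{q}_{mn}) \in \mathbb{H}^{M \times N}$ rather than as three separate real channel matrices.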