Abstract

Low-rank matrix completion has attracted rapidly increasing attention from researchers in recent years for its efficient recovery of matrices in a variety of fields. Numerous studies have exploited popular neural networks to yield low-rank outputs under the framework of low-rank matrix factorization. However, because the rank function is discontinuous and nonconvex, it is difficult to optimize directly via back propagation. Although many studies have proposed relaxations of the rank function, e.g., the Schatten-$p$ norm, these surrogates still face two issues when parameters are updated via back propagation: 1) they remain non-differentiable, which obstructs deriving gradients of the trainable variables; and 2) most of them perform a singular value decomposition of the original matrix at each iteration, which is time-consuming and blocks the propagation of gradients. To address these problems, in this paper we develop an efficient block-wise model, dubbed the differentiable low-rank learning (DLRL) framework, that adopts back propagation to optimize the Multi-Schatten-$p$ norm Surrogate (MSS) function. Unlike the original optimization of this surrogate, the proposed framework avoids singular value decomposition, thereby admitting gradient propagation, and builds a block-wise learning scheme to minimize the values of the Schatten-$p$ norms. Accordingly, it speeds up computation and makes all parameters of the framework learnable under a predefined loss function. Finally, we conduct extensive experiments on image recovery and collaborative filtering. The experimental results verify the superiority of the proposed framework in both runtime and learning performance compared with other state-of-the-art low-rank optimization methods. Our code is available at https://github.com/chenzl23/DLRL.
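For readers unfamiliar with these surrogates, the following is a brief sketch of the idea, not the paper's own derivation. The Schatten-$p$ (quasi-)norm of a matrix $X \in \mathbb{R}^{m \times n}$ with singular values $\sigma_i(X)$ is

$$\|X\|_{S_p} = \Big(\sum_{i=1}^{\min(m,n)} \sigma_i(X)^p\Big)^{1/p}, \qquad 0 < p \le 1,$$

which recovers the nuclear norm at $p = 1$ and approaches the rank as $p \to 0$. Computing it directly requires an SVD, but factorization-based surrogates avoid this: classically, $\|X\|_* = \min_{X = UV} \frac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big)$, and multi-factor generalizations of the form

$$\|X\|_{S_p}^p = \min_{X = U_1 U_2 \cdots U_K} \sum_{i=1}^{K} \frac{p}{p_i}\,\|U_i\|_{S_{p_i}}^{p_i}, \qquad \sum_{i=1}^{K} \frac{1}{p_i} = \frac{1}{p},$$

replace the SVD of $X$ with norms of the factors; whether MSS uses exactly these weights is an assumption here, not a claim about the paper. Choosing $p_i = 2$ for every factor reduces each term to a squared Frobenius norm, which is smooth and SVD-free, so the whole objective can be minimized by back propagation. A minimal PyTorch sketch under these assumptions follows (the factor count K, rank r, lambda_reg, and mask rate are illustrative choices, and this is not the authors' DLRL implementation):

# Minimal sketch (not the authors' DLRL): SVD-free matrix completion with a
# Frobenius-factor surrogate of the Schatten-(2/K) quasi-norm, trained by
# back propagation. K, r, lambda_reg, and the mask rate are illustrative.
import torch

torch.manual_seed(0)
m, n, r, K, lambda_reg = 100, 80, 10, 3, 1e-3

# Synthetic rank-r ground truth and a 50% random observation mask.
X_true = torch.randn(m, r) @ torch.randn(r, n)
mask = (torch.rand(m, n) < 0.5).float()

# K trainable factors with X approximated by U_1 @ U_2 @ ... @ U_K.
dims = [m] + [r] * (K - 1) + [n]
factors = [(0.1 * torch.randn(dims[i], dims[i + 1])).requires_grad_()
           for i in range(K)]
opt = torch.optim.Adam(factors, lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    X = factors[0]
    for U in factors[1:]:
        X = X @ U
    # Data fit on observed entries plus the surrogate
    # (1/K) * sum_i ||U_i||_F^2, which equals ||X||_{S_{2/K}}^{2/K}
    # at the optimal factorization and requires no SVD.
    fit = ((mask * (X - X_true)) ** 2).sum()
    reg = sum((U * U).sum() for U in factors) / K
    loss = fit + lambda_reg * reg
    loss.backward()
    opt.step()

Because the regularizer involves only sums of squared entries, automatic differentiation never touches an SVD, which is the property the abstract highlights.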
