Abstract

The L1-norm cost function for low-rank approximation of a matrix with missing entries is not smooth and cannot be transformed into a standard linear or quadratic program, so its optimization remains an open problem. To tackle this problem, a mollifier is first used to smooth the cost function; by tuning the parameters of the mollifier, the smoothed function can be made highly close to the original one. Next, a recurrent neural network is proposed to optimize the mollified function, and it converges to a local minimum. In addition, to speed up the system, the mollification is implemented as a filtering procedure. The influence of the two mollifier parameters is analyzed theoretically and confirmed experimentally, showing that one parameter is critical to computational efficiency and accuracy, while the other is not. Extensive experiments on synthetic data show that the proposed method is competitive with state-of-the-art methods. In particular, experiments on large matrices and a real structure-from-motion application indicate that the memory requirement of the proposed algorithm is modest, making it suitable for real applications, which often involve large-scale matrix decomposition.
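To illustrate the core idea of mollification, the sketch below smooths the non-differentiable function |x| by convolving it with a Gaussian mollifier, evaluated as a discrete filtering step. This is only a one-dimensional illustration of the general technique; the kernel width `beta` and the grid parameters are illustrative choices, not values taken from the paper.

```python
import numpy as np

def mollified_abs(x, beta=0.1, half_width=4.0, n=401):
    """Smooth |x| by convolving with a Gaussian mollifier of width beta.

    beta, half_width, and n are illustrative parameters, not the
    paper's settings: the kernel is sampled on a grid spanning
    +/- half_width * beta with n points.
    """
    t = np.linspace(-half_width * beta, half_width * beta, n)
    kernel = np.exp(-t**2 / (2 * beta**2))
    kernel /= kernel.sum()            # normalize so the mollifier sums to 1
    # Discrete convolution (|.| * kernel) evaluated at the point x
    return np.sum(np.abs(x - t) * kernel)

# Away from the kink the smoothed value matches |x| closely,
# while at x = 0 the function is differentiable instead of kinked.
print(mollified_abs(2.0))
print(mollified_abs(0.0))
```

Shrinking `beta` drives the mollified function toward |x| itself, which reflects the abstract's point that closeness to the original cost is controlled by the mollifier's parameters.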
