Abstract
Many computer vision problems can be posed as learning a low-dimensional subspace from high-dimensional data. Low-rank matrix factorization (LRMF) is a commonly used subspace learning strategy. Most current LRMF techniques are built on optimization problems with L1-norm or L2-norm losses, which mainly handle Laplace and Gaussian noise, respectively. To make LRMF capable of adapting to more complex noise, this paper proposes a new LRMF model that assumes the noise follows a mixture of exponential power (MoEP) distributions, and then proposes a penalized MoEP (PMoEP) model by combining the penalized likelihood method with MoEP distributions. This setting enables the learned LRMF model to automatically fit real noise through MoEP distributions. Each component in this mixture distribution is adapted from a series of preliminary super- or sub-Gaussian candidates. Moreover, by exploiting the local continuity of the noise components, we embed a Markov random field into the PMoEP model and thereby propose the PMoEP-MRF model. A generalized expectation-maximization (GEM) algorithm and a variational GEM algorithm are designed to infer all parameters involved in the proposed PMoEP and PMoEP-MRF models, respectively. The superiority of our methods is demonstrated by extensive experiments on synthetic data, face modeling, hyperspectral image denoising, and background subtraction.
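To make the overall pipeline concrete, the following is a minimal sketch, not the authors' implementation, of a GEM loop for LRMF under MoEP noise with a fixed set of candidate shape parameters p_k (e.g., p=1 for Laplace, p=2 for Gaussian). The function name pmoep_lrmf, the choice of candidate shapes, and the IRLS-style weighted least-squares update of the factors are all illustrative assumptions; the penalty term that prunes mixture components and the MRF extension described in the abstract are omitted here.

```python
import numpy as np
from scipy.special import gammaln

def ep_logpdf(e, eta, p):
    # Log-density of the exponential power distribution
    #   f(e) = p * eta**(1/p) / (2 * Gamma(1/p)) * exp(-eta * |e|**p)
    # (p=2 recovers a Gaussian, p=1 a Laplace distribution).
    return (np.log(p) + np.log(eta) / p - np.log(2.0) - gammaln(1.0 / p)
            - eta * np.abs(e) ** p)

def pmoep_lrmf(X, rank, shapes=(1.0, 2.0), n_iter=50, seed=0):
    """Illustrative GEM loop for LRMF with MoEP noise (assumed interface).
    shapes: fixed EP shape candidates p_k (super-/sub-Gaussian choices)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    K = len(shapes)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    pi = np.full(K, 1.0 / K)   # mixing proportions
    eta = np.ones(K)           # EP precision parameters
    for _ in range(n_iter):
        E = X - U @ V.T
        # E-step: responsibilities gamma[i, j, k] of each noise component.
        logp = np.stack([np.log(pi[k]) + ep_logpdf(E, eta[k], shapes[k])
                         for k in range(K)], axis=-1)
        logp -= logp.max(axis=-1, keepdims=True)  # numerical stability
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=-1, keepdims=True)
        # M-step: closed-form updates of pi_k and eta_k
        # (eta_k = N_k / (p_k * sum gamma_ijk |e_ij|^p_k) from the EP MLE).
        Nk = gamma.sum(axis=(0, 1))
        pi = Nk / (m * n)
        for k in range(K):
            p = shapes[k]
            eta[k] = Nk[k] / (p * (gamma[..., k] * np.abs(E) ** p).sum() + 1e-12)
        # Generalized M-step for U, V: one IRLS pass. The mixed L_p loss is
        # majorized by a weighted L2 loss with these per-entry weights.
        W = sum(gamma[..., k] * eta[k] * shapes[k]
                * np.maximum(np.abs(E), 1e-6) ** (shapes[k] - 2.0)
                for k in range(K))
        for i in range(m):  # weighted least squares for each row of U
            Vw = V * W[i][:, None]
            U[i] = np.linalg.solve(V.T @ Vw + 1e-8 * np.eye(rank), Vw.T @ X[i])
        for j in range(n):  # weighted least squares for each row of V
            Uw = U * W[:, j][:, None]
            V[j] = np.linalg.solve(U.T @ Uw + 1e-8 * np.eye(rank), Uw.T @ X[:, j])
    return U, V, pi, eta
```

Under this reading, the E-step assigns each residual entry softly to a noise component, the closed-form M-step refits the mixture, and a single reweighted least-squares pass over the factors plays the role of the "generalized" (non-exact) M-step that GEM permits.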