Abstract
We present a deflation method for Nonnegative Matrix Factorization (NMF) that discovers latent components one by one, in order of importance. To do so, we perform a series of individual decompositions, each of which constitutes a deflation step. Each deflation yields a dominant component and a nonnegative residual; when more components are to be extracted, the residual serves as the input to the next deflation. With the help of the proposed additional inequality constraint on the residual during optimization, the accumulated latent components at any deflation step approximate the input to some degree, whereas NMF with an inaccurate rank assumption often fails to do so. The proposed method is beneficial when the model complexity must be decided efficiently from unknown data. We derive multiplicative update rules, similar to those of regular NMF, to perform the optimization. Experiments on online speech enhancement show that the proposed deflation method has advantages over NMF: a scalable model structure, parameters that are reusable across decompositions, and resistance to permutation ambiguity.
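The procedure described above can be sketched as repeated rank-1 factorizations with a clipped residual. The following is a minimal illustration, not the paper's method: the function names are hypothetical, the paper's inequality constraint on the residual is not implemented, and simple clipping is used instead to keep the residual nonnegative.

```python
import numpy as np

def deflate_once(V, n_iter=200, eps=1e-9, seed=0):
    """One hypothetical deflation step: fit a rank-1 NMF to V with
    standard multiplicative updates (Frobenius objective), then return
    the dominant component (w, h) and a nonnegative residual."""
    m, n = V.shape
    rng = np.random.default_rng(seed)
    w = rng.random((m, 1)) + eps
    h = rng.random((1, n)) + eps
    for _ in range(n_iter):
        # Standard multiplicative update rules for regular NMF.
        h *= (w.T @ V) / (w.T @ w @ h + eps)
        w *= (V @ h.T) / (w @ h @ h.T + eps)
    # Clipping stands in for the paper's inequality constraint,
    # which instead enforces nonnegativity during optimization.
    residual = np.maximum(V - w @ h, 0.0)
    return w, h, residual

def deflation_nmf(V, n_components):
    """Extract components one by one, feeding each residual
    into the next deflation step."""
    components, R = [], V
    for _ in range(n_components):
        w, h, R = deflate_once(R)
        components.append((w, h))
    return components, R
```

Because each step consumes only the previous residual, components extracted earlier never change when more are added, which is the source of the scalable model structure and the fixed component ordering mentioned above.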