Abstract

High-dimensional and sparse (HiDS) matrices are commonly encountered in many big-data-related industrial applications such as recommender systems. When acquiring useful patterns from them, non-negative matrix factorization (NMF) models have proven highly effective owing to their strong ability to represent non-negative data. However, current NMF techniques suffer from a) inefficiency in addressing HiDS matrices, and b) constrained training schemes that lack flexibility, extensibility, and adaptability. To address these issues, this work proposes to factorize industrial-size sparse matrices via a novel Inherently Non-negative Latent Factor (INLF) model. It connects the output factors and decision variables via a single-element-dependent sigmoid function, thereby removing the non-negativity constraints from the training process without impacting solution accuracy. Hence, its training process is unconstrained, highly flexible, and compatible with general learning schemes. Experimental results on five HiDS matrices generated by industrial applications indicate that INLF acquires non-negative latent factors from them more efficiently than existing methods.
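To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of how non-negativity can be built into the factors themselves: each latent factor is obtained by passing an unconstrained decision variable through a non-negative elementwise mapping, so plain stochastic gradient descent can be applied with no projection steps or multiplicative-update constraints. The scaled logistic sigmoid used here, the function name train_nonneg_lf, and all hyperparameters are illustrative assumptions; the abstract does not specify INLF's exact single-element-dependent sigmoid or its learning scheme.

```python
import numpy as np

def sigmoid(d):
    """Elementwise logistic function mapping unconstrained values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-d))

def train_nonneg_lf(observed, shape, rank=20, lr=0.01, reg=0.02, epochs=50, scale=5.0):
    """Hypothetical sketch of unconstrained training of non-negative latent factors.

    observed : list of (row, col, value) triples from a sparse HiDS matrix
    shape    : (num_rows, num_cols) of the full matrix

    The latent factors are X = scale * sigmoid(D) and Y = scale * sigmoid(E),
    so they are non-negative by construction while D and E are trained freely
    with plain SGD on the observed entries only.
    """
    m, n = shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((m, rank)) * 0.1   # unconstrained decision variables
    E = rng.standard_normal((n, rank)) * 0.1

    for _ in range(epochs):
        for (u, i, r) in observed:
            s_u, s_i = sigmoid(D[u]), sigmoid(E[i])
            x, y = scale * s_u, scale * s_i            # non-negative factors
            err = r - x @ y
            # Chain rule through the sigmoid keeps the update unconstrained.
            grad_D = (-err * y + reg * x) * scale * s_u * (1.0 - s_u)
            grad_E = (-err * x + reg * y) * scale * s_i * (1.0 - s_i)
            D[u] -= lr * grad_D
            E[i] -= lr * grad_E

    return scale * sigmoid(D), scale * sigmoid(E)
```

Under these assumptions, a call such as train_nonneg_lf([(0, 1, 3.0), (2, 0, 4.5)], shape=(5, 4)) returns two non-negative factor matrices whose product approximates the observed entries, illustrating why the reparameterized training loop remains compatible with general gradient-based learning schemes.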
