Abstract

High-dimensional and sparse (HiDS) matrices are commonly encountered in big-data and industrial applications such as recommender systems. Nonnegative matrix factorization (NMF) models have proven highly effective at extracting useful patterns from them, owing to how well they represent nonnegative data. However, current NMF techniques suffer from 1) inefficiency in handling HiDS matrices and 2) constraints imposed by their training schemes. To address these issues, this paper proposes to extract nonnegative latent factors (NLFs) from HiDS matrices via a novel inherently NLF (INLF) model. INLF bridges the output factors and the decision variables via a single-element-dependent mapping function, which makes parameter training unconstrained and compatible with general training schemes while preserving the nonnegativity constraints. Experimental results on six HiDS matrices arising from industrial applications indicate that INLF acquires NLFs from them more efficiently than any existing method.
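
The sketch below illustrates the general reparameterization idea described above, not the paper's exact INLF algorithm: unconstrained decision variables are passed element-wise through a nonnegative mapping to produce the latent factors, so ordinary stochastic gradient descent on the observed entries keeps the output factors nonnegative by construction. The softplus mapping, the function and parameter names (train_inlf_like, rank, lr, reg), and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softplus(t):
    # Element-wise nonnegative mapping g(t) = log(1 + e^t); g(t) > 0 for all t.
    return np.log1p(np.exp(-np.abs(t))) + np.maximum(t, 0.0)

def softplus_grad(t):
    # Derivative g'(t) = sigmoid(t), needed for the chain rule.
    return 1.0 / (1.0 + np.exp(-t))

def train_inlf_like(rows, cols, vals, n_rows, n_cols, rank=10,
                    lr=0.01, reg=0.05, epochs=50, seed=0):
    """SGD over the observed entries (rows[i], cols[i]) -> vals[i] of a sparse
    nonnegative matrix. A and B are unconstrained decision variables; the
    nonnegative latent factors are X = g(A), Y = g(B)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.1, size=(n_rows, rank))
    B = rng.normal(scale=0.1, size=(n_cols, rank))
    for _ in range(epochs):
        for u, i, r in zip(rows, cols, vals):
            x, y = softplus(A[u]), softplus(B[i])   # nonnegative factors
            err = r - x @ y                          # prediction error
            # Chain rule: gradient w.r.t. A[u] is (-err * y + reg * x) * g'(A[u]).
            A[u] -= lr * ((-err * y + reg * x) * softplus_grad(A[u]))
            B[i] -= lr * ((-err * x + reg * y) * softplus_grad(B[i]))
    return softplus(A), softplus(B)                  # nonnegative by construction

# Toy usage: three observed entries of a 4 x 5 nonnegative matrix.
rows, cols, vals = [0, 1, 3], [2, 0, 4], [3.0, 1.5, 4.0]
X, Y = train_inlf_like(rows, cols, vals, n_rows=4, n_cols=5, rank=3)
assert (X >= 0).all() and (Y >= 0).all()
```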
