Abstract

Orthogonal Nonnegative Matrix Factorization (ONMF) offers an important analytical vehicle for addressing many problems. Encouraged by the record-breaking successes attained by neural computing models on an assortment of data analytics tasks, a rich collection of neural computing models has been proposed to perform ONMF with compelling performance. These existing models can be broadly classified into shallow-layered structure (SLS) based and deep-layered structure (DLS) based models. However, SLS models cannot capture the complex relationships and hierarchical information latent in a matrix because of their simple network structures, while DLS models rely on an iterative procedure to derive the weights, which makes the solution process less efficient and prevents the trained model from being reused to factorize new matrices. To overcome these shortcomings, this paper proposes a novel deep autoencoder network for ONMF, abbreviated as DAutoED-ONMF. Compared with SLS models, the proposed model generates solutions with the same interpretability and uniqueness as the original SLS models, yet attains a superior learning capability thanks to its deep structure. In comparison with DLS models, the new model uses a tailor-designed network training procedure to train a reusable encoder network that can directly factorize any given matrix, with no need to retrain the model for each new matrix. A proof of the training procedure's convergence is presented, together with an analysis of its computational complexity. Numerical experiments conducted on several publicly available data sets convincingly demonstrate that the proposed DAutoED-ONMF model achieves promising performance in terms of multiple metrics.
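
For reference, a standard formulation of the ONMF problem is given below; the notation $X$, $W$, $H$ is assumed here for illustration and is not drawn from the paper itself. Given a nonnegative data matrix $X \in \mathbb{R}_{\ge 0}^{m \times n}$ and a target rank $k$, ONMF seeks nonnegative factors $W \in \mathbb{R}_{\ge 0}^{m \times k}$ and $H \in \mathbb{R}_{\ge 0}^{k \times n}$ by solving

$$
\min_{W \ge 0,\; H \ge 0} \; \lVert X - WH \rVert_F^2 \quad \text{subject to} \quad HH^{\mathsf{T}} = I .
$$

The orthogonality constraint on the rows of $H$, combined with nonnegativity, is what gives ONMF solutions the interpretability and (near-)uniqueness properties mentioned above, since each column of $X$ is then associated essentially with a single basis column of $W$.

The sketch below illustrates, in broad strokes, how a deep autoencoder can be trained to produce such a factorization, with an encoder that can later be reused on new data. It is a minimal illustration only and not the paper's DAutoED-ONMF procedure: the layer sizes, the softplus device for keeping the code nonnegative, and the orthogonality penalty are all assumptions made for the example.

```python
# Minimal sketch (PyTorch): a deep autoencoder whose encoder maps each data
# point to a nonnegative code (a column of H) and whose decoder weight plays
# the role of the nonnegative basis W, with a soft orthogonality penalty.
# NOT the paper's DAutoED-ONMF; architecture and penalty are assumptions.
import torch
import torch.nn as nn

class ONMFAutoencoder(nn.Module):
    def __init__(self, m, k, hidden=128):
        super().__init__()
        # Deep encoder: a data point of dimension m -> nonnegative code of length k.
        self.encoder = nn.Sequential(
            nn.Linear(m, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, k), nn.Softplus(),   # Softplus keeps the code nonnegative
        )
        # Unconstrained parameter; softplus in forward() keeps the basis W nonnegative.
        self.W_raw = nn.Parameter(0.1 * torch.randn(m, k))

    def forward(self, x):
        h = self.encoder(x)                        # nonnegative codes, shape (batch, k)
        W = torch.nn.functional.softplus(self.W_raw)
        return h @ W.T, h, W                       # reconstruction, codes, basis

def train_step(model, X_batch, optimizer, ortho_weight=1.0):
    """One gradient step on reconstruction error plus an orthogonality penalty."""
    recon, H, _ = model(X_batch)
    loss_recon = ((X_batch - recon) ** 2).mean()
    # Encourage the k code dimensions to be mutually orthogonal by penalizing
    # the off-diagonal entries of the Gram matrix of the codes (HH^T in the
    # row-wise notation of the formulation above).
    gram = H.T @ H
    off_diag = gram - torch.diag(torch.diagonal(gram))
    loss = loss_recon + ortho_weight * (off_diag ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage: factorize a random nonnegative 50 x 200 data matrix
# (50 samples of dimension 200) into 10 nonnegative, nearly orthogonal parts.
model = ONMFAutoencoder(m=200, k=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X = torch.rand(50, 200)                            # rows are data points
for _ in range(300):
    train_step(model, X, opt)
_, H, W = model(X)                                 # trained encoder can be reused on new rows
```

Once trained, such an encoder can be applied to previously unseen data without rerunning an iterative factorization, which is the reusability property contrasted against iterative DLS models in the abstract.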
