Abstract

An AutoEncoder (AE)-based latent factor analysis model can precisely extract non-linear latent features from a High-dimensional and Sparse (HiDS) matrix from a recommender system. However, it requires prefilling the HiDS matrix's unknown entries to be compatible with a GPU platform, which incurs tremendous computation and storage costs. To address this issue, this paper presents a CUDA-Parallelized Fast AutoEncoder (CPFAE) for highly efficient latent factor analysis on a high-dimensional and sparse matrix from a recommender system. Its main idea is two-fold: a) implementing the mini-batch-based weight update as efficient sparse matrix multiplication to train the neural network, and b) implementing an efficient computation model for a compressed sparse matrix to make full use of a GPU platform's computation power. Experimental results on two HiDS matrices from real applications demonstrate that, compared with a state-of-the-art AE-based model, CPFAE achieves significant gains in computation and storage efficiency.
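The core idea, training on only the observed entries of a sparse matrix instead of prefilling the unknowns, can be sketched as follows. This is a minimal illustrative sketch in Python/SciPy, not the paper's CUDA implementation; the toy matrix, layer sizes, and variable names are all assumptions for demonstration.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical sketch: one forward pass of an AE over a sparse rating
# matrix stored in CSR (compressed sparse row) format, so only the
# observed entries participate -- no prefilling of unknown data.

rng = np.random.default_rng(0)

# Toy HiDS matrix: 4 users x 6 items, mostly unobserved (stored as zeros).
R = csr_matrix(np.array([
    [5., 0., 0., 3., 0., 0.],
    [0., 4., 0., 0., 0., 1.],
    [0., 0., 2., 0., 5., 0.],
    [1., 0., 0., 0., 0., 4.],
]))

d_in, d_hidden = R.shape[1], 3
W1 = rng.standard_normal((d_in, d_hidden)) * 0.1   # encoder weights
W2 = rng.standard_normal((d_hidden, d_in)) * 0.1   # decoder weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sparse-dense product: CSR @ dense touches only the stored entries,
# which is the efficiency that sparse-matrix-based training exploits.
H = sigmoid(R @ W1)          # latent codes, shape (4, 3)
R_hat = H @ W2               # dense reconstruction, shape (4, 6)

# The loss is evaluated only on the observed entries of R.
rows, cols = R.nonzero()
loss = float(np.mean((R_hat[rows, cols] - R.toarray()[rows, cols]) ** 2))
print(H.shape, R_hat.shape)
```

A GPU implementation such as CPFAE would run the same sparse products as parallel kernels over the compressed storage; the SciPy version above only conveys the data-flow.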

