Abstract

An AutoEncoder (AE)-based latent factor analysis model can precisely extract non-linear latent features from a High-dimensional and Sparse (HiDS) matrix arising in a recommender system. However, it requires prefilling the HiDS matrix's unknown entries to achieve compatibility with a GPU platform, which incurs tremendous computation and storage costs. To address this issue, this paper presents a CUDA-Parallelized Fast AutoEncoder (CPFAE) for highly efficient latent factor analysis on an HiDS matrix from a recommender system. Its main idea is two-fold: a) implementing mini-batch-based weight updates in the form of efficient sparse matrix multiplication to train the neural network, and b) implementing an efficient computation model for a compressed sparse matrix to make full use of a GPU platform's computation power. Experimental results on two HiDS matrices from real applications demonstrate that, compared with a state-of-the-art AE-based model, CPFAE achieves significant gains in computation and storage efficiency.
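The core of idea b) is that a compressed sparse format lets the encoder operate only on observed entries, with no dense prefilling. As a minimal illustrative sketch (not the paper's actual implementation, and in plain Python rather than CUDA), the following stores an HiDS rating matrix in Compressed Sparse Row (CSR) form and evaluates a single hypothetical sigmoid encoder layer via a sparse-dense product; all names and values here are assumptions for illustration:

```python
import math

def to_csr(sparse_rows):
    """Build CSR arrays (values, col_idx, row_ptr) from a list of
    {item_index: rating} dicts, one dict per user (row).
    Unknown entries are simply never stored."""
    values, col_idx, row_ptr = [], [], [0]
    for row in sparse_rows:
        for c in sorted(row):
            values.append(row[c])
            col_idx.append(c)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_encode(values, col_idx, row_ptr, W):
    """Multiply the CSR matrix by a dense weight matrix W
    (n_items x n_hidden) and apply a sigmoid; only observed
    entries contribute to each accumulator."""
    n_hidden = len(W[0])
    out = []
    for r in range(len(row_ptr) - 1):
        acc = [0.0] * n_hidden
        for k in range(row_ptr[r], row_ptr[r + 1]):
            v, c = values[k], col_idx[k]
            for h in range(n_hidden):
                acc[h] += v * W[c][h]
        out.append([1.0 / (1.0 + math.exp(-a)) for a in acc])
    return out

# Toy example: 3 users x 4 items, only 5 of 12 entries observed,
# encoded into 2 hidden units (weights chosen arbitrarily).
ratings = [{0: 5.0, 2: 3.0}, {1: 4.0}, {0: 1.0, 3: 2.0}]
W = [[0.1, -0.2], [0.0, 0.3], [0.2, 0.1], [-0.1, 0.4]]
vals, cols, ptr = to_csr(ratings)
hidden = csr_encode(vals, cols, ptr, W)
```

On a GPU, each row's accumulation would map to a thread or warp, which is what makes the compressed format attractive for the parallel scheme the abstract describes.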
