Abstract

Because Factorization Machine (FM) models describe feature interactions linearly, they cannot accurately capture complex non-linear characteristics of the data. Furthermore, the random initialization used in these FM models seriously affects convergence and performance, so the randomly initialized embedding process may not be sufficient to capture the information in the data. Although FM models based on deep neural networks (DNNs) have recently been proposed to model higher-order feature interactions, they are difficult to train. To address these challenges, we propose a neural embedding factorization machine (NEFM) model, which effectively initializes the embedding layers using an unsupervised pre-training framework based on a probabilistic auto-encoder. NEFM couples the strength of FM models in modeling second-order feature interactions with the advantage of DNNs in modeling non-linear feature interactions. Experimental results demonstrate the effectiveness of the proposed NEFM. For example, NEFM improves performance by at least 6.99% over non-pre-trained FM models, and compared with FM models pre-trained by DNN-based methods, it reduces test error by at least 1.02%.
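
To make the described architecture concrete, the sketch below shows one plausible reading of the abstract: a shared embedding table that can be initialized from unsupervised pre-trained weights (e.g., from an auto-encoder), an FM-style second-order interaction term, and a DNN branch for non-linear interactions. All class names, layer sizes, and wiring choices are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (PyTorch) of an FM + DNN model with optionally pre-trained
# embeddings, following the high-level description in the abstract.
# All names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class NEFMSketch(nn.Module):
    def __init__(self, num_features, embed_dim, hidden_dims=(64, 32),
                 pretrained_embeddings=None):
        super().__init__()
        self.embedding = nn.Embedding(num_features, embed_dim)
        if pretrained_embeddings is not None:
            # Initialize from pre-trained (e.g. auto-encoder) weights instead of
            # random initialization, as the abstract motivates.
            self.embedding.weight.data.copy_(pretrained_embeddings)
        self.linear = nn.Embedding(num_features, 1)  # first-order (linear) term
        layers, in_dim = [], embed_dim
        for h in hidden_dims:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        layers.append(nn.Linear(in_dim, 1))
        self.dnn = nn.Sequential(*layers)

    def forward(self, feature_ids):
        # feature_ids: (batch, num_fields) integer indices of active features
        emb = self.embedding(feature_ids)                  # (B, F, K)
        linear_term = self.linear(feature_ids).sum(dim=1)  # (B, 1)
        # FM second-order interactions: 0.5 * ((sum_i v_i)^2 - sum_i v_i^2)
        sum_sq = emb.sum(dim=1).pow(2)
        sq_sum = emb.pow(2).sum(dim=1)
        fm_term = 0.5 * (sum_sq - sq_sum).sum(dim=1, keepdim=True)
        # DNN over the pooled embedding models non-linear feature interactions
        dnn_term = self.dnn(emb.sum(dim=1))
        return linear_term + fm_term + dnn_term
```

In this reading, the pre-trained embeddings replace random initialization for the shared embedding table, while the FM term and the DNN branch are trained jointly on top of it.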
