Abstract

Recently, the variational autoencoder (VAE), a model that supports Bayesian inference and variational approximation of the posterior distribution, has been used to address a prevailing challenge in collaborative filtering (CF): the inefficient modelling of non-linear user-item interactions. However, the priors chosen in these VAE-based CF models are too simple to capture the underlying distributions, skewing the maximum likelihood estimation of the model parameters. In this paper, we present an Implicit Optimal Variational Autoencoder model for Collaborative Filtering (IOVA-CF) to alleviate this challenge. Specifically, IOVA-CF utilizes a novel implicit optimal prior (IoP), which aids in generating excellent latent representations. IoP is an aggregated posterior prior (AP), that is, the expectation of the posterior over the data distribution. Unlike previous AP approaches, however, IoP uses the density ratio technique to compute the Kullback-Leibler (KL) divergence without explicitly modelling the aggregated posterior. The density ratio technique also lets us calculate the KL divergence between the aggregated posterior and the inference network in closed form, making IoP an optimal prior for maximizing the objective function. Additionally, unlike earlier VAE-based recommendation models, IOVA-CF significantly alleviates the over-regularization issue and adequately captures the uncertainty of the latent space. Finally, empirical evaluations against several competitive baseline models on four real-world datasets reveal the superior performance of IOVA-CF.
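The central computational idea, estimating the KL term against an aggregated posterior prior via the density ratio technique, can be sketched in a few lines. The snippet below is a minimal illustration and not the authors' implementation: the discriminator architecture, `latent_dim`, and the function names are assumptions, and the discriminator is presumed to have been trained by logistic regression to separate standard-normal samples from aggregated posterior samples, so that its logit approximates log N(z; 0, I) - log q(z).

```python
# Minimal sketch (assumed names, not the paper's code) of the density-ratio
# estimate of KL(q(z|x) || q(z)), where q(z) is the aggregated posterior.
import torch
import torch.nn as nn

latent_dim = 32  # illustrative latent dimensionality

# Discriminator T(z): assumed to be trained to distinguish samples from
# N(0, I) (label 1) from aggregated posterior samples (label 0); its logit
# then estimates the log density ratio log N(z; 0, I) - log q(z).
disc = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

def kl_to_aggregated_posterior(mu, logvar, disc):
    """Estimate KL(q(z|x) || q(z)) per example.

    Decomposition: KL(q(z|x) || q(z))
        = KL(q(z|x) || N(0, I)) + E_{q(z|x)}[log N(z; 0, I) - log q(z)].
    The first term is closed form for a Gaussian encoder; the second is
    approximated by the discriminator's logit, so the aggregated posterior
    q(z) never has to be modelled explicitly.
    """
    # Closed-form KL between N(mu, diag(exp(logvar))) and N(0, I)
    kl_std = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1, dim=1)
    # Reparameterized sample z ~ q(z|x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    # Density-ratio correction term from the discriminator logit
    log_ratio = disc(z).squeeze(1)
    return kl_std + log_ratio

# Usage with dummy encoder outputs for a batch of 8 users
mu, logvar = torch.zeros(8, latent_dim), torch.zeros(8, latent_dim)
print(kl_to_aggregated_posterior(mu, logvar, disc).shape)  # torch.Size([8])
```

Because the prior is the aggregated posterior itself, this KL term no longer pulls every per-user posterior toward a fixed standard normal, which is the mechanism behind the reduced over-regularization described above.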
