Abstract

This study introduces a novel movie recommender system built on a Semantic-Enhanced Variational Graph Autoencoder for Movie Recommendation (SeVGAER) architecture. The system harnesses additional information from movie plot summaries scraped from the internet, which are transformed into semantic vectors by a large language model and serve as supplementary features for the movie nodes. SeVGAER adopts an encoder-decoder structure operating on a user-movie bipartite graph: GraphSAGE convolutional layers with modified aggregators act as the encoder, extracting latent representations of the nodes, and a Multi-Layer Perceptron (MLP) acts as the decoder, predicting ratings from these representations together with additional graph-based features. To address overfitting and improve generalization, a novel training strategy is introduced: the training data are randomly split into two halves at each training step, the model produces outputs on each half, and a new loss term enforces consistency between the two outputs, a strategy that can be viewed as a form of contrastive learning. The model is optimized with a combination of this contrastive loss, a graph reconstruction loss, and a KL divergence loss. Experiments on the MovieLens 100K dataset demonstrate the effectiveness of this approach in improving movie recommendation performance.
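To make the described training scheme concrete, below is a minimal PyTorch sketch of the combined objective: a graph reconstruction loss, a KL divergence term, and a consistency term between outputs computed on two random halves of the training edges. Everything here is illustrative rather than the paper's implementation: a mean aggregator stands in for the unspecified "modified aggregators", a single GraphSAGE layer is used, the consistency loss is interpreted as node-level agreement between the latents produced from the two halves, and all names (SageLayer, SeVGAERSketch, training_step) and loss weights (beta, gamma) are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SageLayer(nn.Module):
    """GraphSAGE-style layer with a mean aggregator (one plausible choice;
    the paper's exact modified aggregator is not specified here)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, edge_index):
        n = x.size(0)
        # Treat the user-movie bipartite graph as undirected for message passing.
        src = torch.cat([edge_index[0], edge_index[1]])
        dst = torch.cat([edge_index[1], edge_index[0]])
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(n).index_add_(0, dst, torch.ones(src.size(0)))
        agg = agg / deg.clamp(min=1).unsqueeze(1)   # mean over neighbors
        return F.relu(self.lin(torch.cat([x, agg], dim=1)))


class SeVGAERSketch(nn.Module):
    def __init__(self, feat_dim, hid_dim, lat_dim):
        super().__init__()
        self.enc = SageLayer(feat_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, lat_dim)
        self.logvar = nn.Linear(hid_dim, lat_dim)
        # MLP decoder maps a concatenated (user, movie) latent pair to a rating.
        self.dec = nn.Sequential(
            nn.Linear(2 * lat_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1)
        )

    def encode(self, x, edge_index):
        h = self.enc(x, edge_index)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return z, mu, logvar

    def decode(self, z, edge_index):
        users, movies = edge_index
        return self.dec(torch.cat([z[users], z[movies]], dim=1)).squeeze(-1)


def training_step(model, x, edge_index, ratings, beta=1e-3, gamma=0.1):
    """One step of the two-half scheme: split the training edges at random,
    run the model on each half, and combine reconstruction, KL, and a
    consistency term between the two halves' node latents."""
    perm = torch.randperm(edge_index.size(1))
    halves = (perm[: len(perm) // 2], perm[len(perm) // 2:])
    loss, latents = 0.0, []
    for idx in halves:
        z, mu, logvar = model.encode(x, edge_index[:, idx])
        pred = model.decode(z, edge_index[:, idx])
        recon = F.mse_loss(pred, ratings[idx])            # graph reconstruction
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = loss + recon + beta * kl
        latents.append(z)
    # Consistency ("contrastive") term: node latents computed from the two
    # edge halves should agree.
    return loss + gamma * F.mse_loss(latents[0], latents[1])


# Toy usage: 5 users and 4 movies share one node index space; in the paper's
# setting, movie node features would include the LLM plot-summary embeddings.
x = torch.randn(9, 16)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 0], [5, 6, 7, 8, 5, 6]])
ratings = torch.tensor([4.0, 3.0, 5.0, 2.0, 1.0, 4.0])
model = SeVGAERSketch(feat_dim=16, hid_dim=32, lat_dim=8)
training_step(model, x, edge_index, ratings).backward()

The weights beta and gamma trade off the KL regularizer and the consistency term against rating reconstruction; in practice they would be tuned on a validation split.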
