Abstract

Event recommendation plays a pivotal role in helping users discover interesting upcoming events in Event-Based Social Networks (EBSNs). Previous research has established that mining contextual features and implicit relationships is crucial for improving recommendation performance and alleviating data sparsity. However, the noise inherent in contextual features exacerbates data sparsity and prevents previous methods from exploiting implicit relationships to mitigate it. To address this challenge, we propose a variational type graph autoencoder that attenuates the influence of noise in different types of contextual features by introducing type-specific latent variables. First, we introduce a heterogeneous denoising convolution module composed of two components: 1) a denoising attention aggregation mechanism that mitigates the influence of noisy structures and uncovers implicit relationships, and 2) a heterogeneous normalization module that leverages contextual features within the same type to alleviate both feature noise and data sparsity. Furthermore, we propose a learnable heterogeneous mixture prior that assigns different priors to distinct types of latent variables, effectively modeling the different types of contextual features. Comprehensive experiments on real-world datasets demonstrate the compelling performance of our model compared with state-of-the-art approaches.
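The abstract gives no equations or code, so the following is a minimal, hypothetical PyTorch sketch of the three ideas it names: edge-gated (denoising) attention aggregation, per-type normalization, and type-specific latent variables regularized toward a learnable Gaussian-mixture prior. Every class and function name here, the sigmoid edge gate, the unit-variance mixture components, and the single-sample Monte Carlo KL estimate are illustrative assumptions, not the authors' formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LOG_2PI = math.log(2 * math.pi)


def gaussian_log_prob(z, mu, logvar):
    # log-density of a diagonal Gaussian N(z; mu, diag(exp(logvar)))
    return -0.5 * (logvar + (z - mu) ** 2 / logvar.exp() + LOG_2PI).sum(-1)


def mixture_log_prob(z, comp_mu, comp_logit):
    # log sum_k pi_k N(z; mu_k, I), with weights pi = softmax(comp_logit)
    log_pi = F.log_softmax(comp_logit, dim=-1)               # (K,)
    diff = z.unsqueeze(1) - comp_mu.unsqueeze(0)             # (N, K, D)
    comp_lp = -0.5 * ((diff ** 2).sum(-1) + z.size(-1) * LOG_2PI)  # (N, K)
    return torch.logsumexp(log_pi + comp_lp, dim=-1)         # (N,)


class DenoisingAttention(nn.Module):
    """Neighbor aggregation whose attention weights are scaled by a learned
    edge gate, softly suppressing noisy edges before message passing."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)   # scores a (dst, src) node pair
        self.gate = nn.Linear(2 * dim, 1)   # per-edge denoising gate in (0, 1)

    def forward(self, h, edge_index):
        src, dst = edge_index                                # edges src -> dst
        pair = torch.cat([h[dst], h[src]], dim=-1)
        score = self.attn(pair).squeeze(-1)
        gate = torch.sigmoid(self.gate(pair)).squeeze(-1)
        # gated softmax over each destination node's incoming edges
        num = torch.exp(score - score.max()) * gate
        denom = torch.zeros(h.size(0), device=h.device).index_add_(0, dst, num)
        w = num / (denom[dst] + 1e-9)
        return torch.zeros_like(h).index_add_(0, dst, w.unsqueeze(-1) * h[src])


class TypeAwareVariationalHead(nn.Module):
    """Per-type normalization, type-specific encoders q_t(z|h), and a learnable
    Gaussian-mixture prior per type; KL is estimated with one Monte Carlo sample."""

    def __init__(self, num_types, dim, latent_dim, k=4):
        super().__init__()
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_types))
        self.mu = nn.ModuleList(nn.Linear(dim, latent_dim) for _ in range(num_types))
        self.logvar = nn.ModuleList(nn.Linear(dim, latent_dim) for _ in range(num_types))
        self.prior_mu = nn.Parameter(0.1 * torch.randn(num_types, k, latent_dim))
        self.prior_logit = nn.Parameter(torch.zeros(num_types, k))

    def forward(self, h, node_type):
        z_all = h.new_zeros(h.size(0), self.prior_mu.size(-1))
        kl = h.new_zeros(())
        for t in range(len(self.mu)):
            mask = node_type == t
            if not mask.any():
                continue
            ht = self.norms[t](h[mask])                      # per-type normalization
            mu, logvar = self.mu[t](ht), self.logvar[t](ht)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            z_all[mask] = z
            # single-sample Monte Carlo estimate of KL(q_t || mixture prior_t)
            kl = kl + (gaussian_log_prob(z, mu, logvar)
                       - mixture_log_prob(z, self.prior_mu[t], self.prior_logit[t])).sum()
        return z_all, kl


# Toy usage: 6 nodes of 2 types and a few directed edges.
h = torch.randn(6, 16)
node_type = torch.tensor([0, 0, 1, 1, 1, 0])
edge_index = torch.tensor([[0, 2, 3, 5], [1, 1, 4, 4]])
h = h + DenoisingAttention(16)(h, edge_index)   # residual keeps isolated nodes
z, kl = TypeAwareVariationalHead(num_types=2, dim=16, latent_dim=8)(h, node_type)
```

The Monte Carlo KL is used because the KL divergence between a Gaussian posterior and a Gaussian mixture has no closed form; whether the paper uses this estimator, a closed-form bound, or another relaxation is not stated in the abstract.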
