Abstract

In real-world social networks, hashtags are widely used to convey the content of individual microblogs. However, users do not always attach hashtags when posting a microblog, so considerable effort has been invested in automatic hashtag recommendation. As a new trend, users no longer post only text but prefer to share multimodal data, such as images. To handle this setting, we propose an attention-based multimodal neural network model (AMNN) that learns representations of multimodal microblogs and recommends relevant hashtags. In this article, we cast the hashtag recommendation task as a sequence generation problem. We then propose a hybrid neural network approach that extracts features from both texts and images and incorporates them into a sequence-to-sequence model for hashtag recommendation. Experimental results on a data set collected from Instagram and on two public data sets demonstrate that the proposed method outperforms state-of-the-art methods, achieving the best performance on three metrics: precision, recall, and accuracy. The source code for this article is available at https://github.com/w5688414/AMNN.
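
To make the described architecture concrete, below is a minimal PyTorch-style sketch of an attention-based multimodal sequence-to-sequence model in the spirit of the abstract: text and image features are encoded into a shared memory, and an attentive decoder generates hashtags one token at a time. This is an illustrative assumption, not the authors' AMNN implementation (see the GitHub repository for that); the module names, dimensions, and the additive attention variant are all hypothetical choices.

```python
# Hypothetical sketch of an attention-based multimodal seq2seq for hashtag
# generation. NOT the authors' AMNN code; names and dimensions are assumed.
import torch
import torch.nn as nn


class MultimodalHashtagSeq2Seq(nn.Module):
    def __init__(self, vocab_size, tag_vocab_size, emb_dim=256, hid_dim=512,
                 img_feat_dim=2048):
        super().__init__()
        # Text encoder: embed microblog tokens and run a GRU over them.
        self.text_emb = nn.Embedding(vocab_size, emb_dim)
        self.text_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Image features (e.g., from a pretrained CNN) projected to hid_dim.
        self.img_proj = nn.Linear(img_feat_dim, hid_dim)
        # Decoder: generate the hashtag sequence one tag at a time.
        self.tag_emb = nn.Embedding(tag_vocab_size, emb_dim)
        self.dec = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim * 2, 1)
        self.out = nn.Linear(hid_dim, tag_vocab_size)

    def forward(self, tokens, img_feats, tag_inputs):
        # Encode text: (B, T) -> (B, T, H); keep all states for attention.
        text_states, h = self.text_enc(self.text_emb(tokens))
        # Encode image: (B, D_img) -> (B, 1, H); treat it as one extra
        # memory slot so attention can weigh text vs. image evidence.
        img_state = self.img_proj(img_feats).unsqueeze(1)
        memory = torch.cat([text_states, img_state], dim=1)  # (B, T+1, H)

        outputs = []
        for t in range(tag_inputs.size(1)):
            # Additive-style attention over the multimodal memory.
            query = h[-1].unsqueeze(1).expand(-1, memory.size(1), -1)
            scores = self.attn(torch.cat([memory, query], dim=-1))
            weights = torch.softmax(scores, dim=1)           # (B, T+1, 1)
            context = (weights * memory).sum(dim=1)          # (B, H)
            step_in = torch.cat(
                [self.tag_emb(tag_inputs[:, t]), context], dim=-1
            ).unsqueeze(1)
            dec_out, h = self.dec(step_in, h)
            outputs.append(self.out(dec_out.squeeze(1)))
        return torch.stack(outputs, dim=1)  # (B, L, tag_vocab)


# Usage sketch: a batch of 4 posts with 20 tokens each, ResNet-style image
# features, and teacher-forced hashtag inputs of length 5.
model = MultimodalHashtagSeq2Seq(vocab_size=10000, tag_vocab_size=2000)
logits = model(torch.randint(0, 10000, (4, 20)),
               torch.randn(4, 2048),
               torch.randint(0, 2000, (4, 5)))
print(logits.shape)  # torch.Size([4, 5, 2000])
```

Treating the projected image feature as one extra memory slot lets the decoder's attention weigh textual and visual evidence jointly at each generation step, mirroring the hybrid text-image encoding the abstract describes.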

