Abstract
Emojis are widely used in social media as a new way to express various emotions and personalities. However, most previous research has focused only on limited features from textual information while neglecting the rich emoji information in user-generated content. This study presents two novel attention-based Bi-LSTM architectures that incorporate emoji and textual information at different semantic levels, and investigates how emoji information contributes to the performance of personality recognition tasks. Specifically, we first extract emoji information from online user-generated content and concatenate word embeddings and emoji embeddings at the word and sentence levels. We then obtain document representations for all users from the word and sentence levels during training and feed them into the attention-based Bi-LSTM architecture to predict the Big Five personality traits. Experimental results show that the proposed methods achieve state-of-the-art performance over baseline models on a real-world dataset, demonstrating the usefulness and contribution of emoji information in personality recognition tasks. The findings could help researchers and practitioners better understand the rich semantics of emoji information and provide a new way to introduce emoji information into personality recognition tasks.
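The two core steps the abstract describes — concatenating word and emoji embeddings per token, and attention-pooling the recurrent hidden states into a document representation — can be sketched as below. This is a minimal illustration, not the paper's implementation: all dimensions are hypothetical, random vectors stand in for learned embeddings, and a random matrix stands in for the Bi-LSTM's forward/backward outputs; only the additive-attention pooling is computed in full.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden, W, v):
    # additive attention: score each timestep, normalize, weighted sum
    scores = np.tanh(hidden @ W) @ v      # (T,)
    weights = softmax(scores)             # (T,) attention distribution
    return weights @ hidden, weights      # pooled vector, weights

rng = np.random.default_rng(0)
T, d_word, d_emoji, d_hid = 6, 50, 10, 32   # hypothetical sizes

# Step 1: concatenate word and emoji embeddings token by token
word_emb = rng.normal(size=(T, d_word))
emoji_emb = rng.normal(size=(T, d_emoji))
tokens = np.concatenate([word_emb, emoji_emb], axis=1)   # (T, 60)

# Stand-in for Bi-LSTM outputs over `tokens`:
# forward and backward states concatenated -> (T, 2*d_hid)
hidden = rng.normal(size=(T, 2 * d_hid))

# Step 2: attention pooling into a single document vector
W = rng.normal(size=(2 * d_hid, 2 * d_hid))
v = rng.normal(size=(2 * d_hid,))
doc_vec, weights = attention_pool(hidden, W, v)
```

In the full model, `doc_vec` would feed a final classification or regression layer producing the Big Five trait scores; here it simply shows how the attention weights turn variable-length sequences into a fixed-size representation.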