Abstract

The advent of internet technologies has created new ways of writing anonymously, which has led to criminal and malicious activities on social media platforms. Automatic verification of the authorship of available content is therefore the need of the hour. Social media sites, such as Facebook and Twitter, are used heavily by users to share information about their day-to-day activities. In tweet-user verification, the identity of a suspect user is matched against tweets written by that user. Writing styles differ from user to user owing to unique word choices, emoji selection, sentence formation, and punctuation usage. We have developed a multimodal Siamese-based architecture that uses attention between the text and emoji parts of a tweet to generate a combined tweet representation. Attention helps in selecting the relevant information from the different modalities, and modality attention is used to fuse the two modalities (text and emoji). We evaluate the proposed model on a newly developed multimodal Twitter dataset and achieve average accuracy, precision, recall, and $F$-measure values of 68.50%, 78.52%, 69.47%, and 67.05%, respectively. The results show an increase of 2.14% in $F$-measure over the current state-of-the-art (SOTA) models for this dataset.
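To make the fusion step concrete, below is a minimal sketch of how attention-based fusion of text and emoji embeddings into a single tweet representation might look in a Siamese setup. The framework (PyTorch), module and variable names, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of modality-attention fusion for Siamese tweet-user verification.
# Framework (PyTorch), names, and dimensions are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityAttentionFusion(nn.Module):
    """Fuse text and emoji embeddings into one tweet vector via learned modality weights."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each modality vector

    def forward(self, text_emb: torch.Tensor, emoji_emb: torch.Tensor) -> torch.Tensor:
        # Stack the two modality vectors: (batch, 2, dim)
        modalities = torch.stack([text_emb, emoji_emb], dim=1)
        # Attention weights over the modalities: (batch, 2, 1)
        weights = torch.softmax(self.score(modalities), dim=1)
        # Weighted sum gives the fused tweet representation: (batch, dim)
        return (weights * modalities).sum(dim=1)


# Siamese-style comparison: both tweets are fused with shared weights and then compared.
fusion = ModalityAttentionFusion(dim=128)
text_a, emoji_a = torch.randn(4, 128), torch.randn(4, 128)
text_b, emoji_b = torch.randn(4, 128), torch.randn(4, 128)
tweet_a = fusion(text_a, emoji_a)
tweet_b = fusion(text_b, emoji_b)
similarity = F.cosine_similarity(tweet_a, tweet_b)  # higher -> more likely same author
```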
