Abstract

Social media gives everyone a way to express views and communicate with a mass audience, but it has also become a venue for hateful behavior, abusive language, cyber-bullying, and personal attacks. Determining whether a comment or post is abusive remains difficult and time consuming, and most social media platforms are still searching for more efficient moderation solutions. Automating this task helps identify abusive comments, protects websites, increases user safety, and improves the quality of online discussion. In this paper, Kaggle's toxic comment dataset is used to train deep learning models that classify comments into the following categories: toxic, severe toxic, obscene, threat, insult, and identity hate. Several deep learning techniques are trained on the dataset and compared to determine which model performs best at comment classification. The techniques evaluated are a long short-term memory (LSTM) network with and without pretrained GloVe word embeddings, and a convolutional neural network (CNN) with and without GloVe embeddings.
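To illustrate the LSTM-with-GloVe configuration described above, the sketch below shows a multi-label Keras classifier with six sigmoid outputs. It is not the authors' exact implementation; the vocabulary size, sequence length, embedding dimension, layer sizes, and the helper for loading GloVe vectors are all assumptions made for illustration.

```python
# Minimal sketch of an LSTM classifier with pretrained GloVe embeddings for the
# six-label toxic comment task. Vocabulary size, sequence length, embedding
# dimension, and the GloVe file format handling are illustrative assumptions.
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, GlobalMaxPooling1D, Dense
from tensorflow.keras.models import Model

MAX_WORDS, MAX_LEN, EMB_DIM = 20000, 100, 100  # assumed hyperparameters


def load_glove_matrix(path, word_index):
    """Build an embedding matrix from a GloVe text file for the tokenizer's vocabulary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    matrix = np.zeros((MAX_WORDS, EMB_DIM))
    for word, i in word_index.items():
        if i < MAX_WORDS and word in vectors:
            matrix[i] = vectors[word]
    return matrix


def build_lstm_model(embedding_matrix=None):
    """LSTM over comment tokens; sigmoid outputs allow several labels per comment."""
    inp = Input(shape=(MAX_LEN,))
    if embedding_matrix is not None:
        # "With GloVe": frozen pretrained word vectors.
        x = Embedding(MAX_WORDS, EMB_DIM, weights=[embedding_matrix], trainable=False)(inp)
    else:
        # "Without GloVe": embeddings learned from scratch during training.
        x = Embedding(MAX_WORDS, EMB_DIM)(inp)
    x = LSTM(64, return_sequences=True)(x)
    x = GlobalMaxPooling1D()(x)
    # toxic, severe toxic, obscene, threat, insult, identity hate
    out = Dense(6, activation="sigmoid")(x)
    model = Model(inp, out)
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model
```

The CNN variant mentioned in the abstract would follow the same pattern, swapping the LSTM layer for one-dimensional convolution and pooling layers over the embedded sequence.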
