Abstract

In the age of the internet and social media platforms, the problem of toxic remarks has become increasingly prominent. This research addresses the application of an advanced deep learning technique, specifically Bidirectional Long Short-Term Memory networks (Bi-LSTM), to the classification of toxic comments. Additionally, we employ pre-trained GloVe word embeddings to enhance the performance of the models. The project aims to increase the accuracy and efficiency of toxic comment classification, enabling platforms to automatically recognize and filter out harmful content. By utilizing Bi-LSTM architectures, which excel at capturing sequential and contextual dependencies in textual data, we can effectively identify toxic language. The integration of GloVe embeddings strengthens the semantic representation of words, contributing to more precise categorization results. Through a comprehensive analysis and evaluation of the proposed models on benchmark datasets such as the Jigsaw Multilingual Toxic Comment Classification dataset, we demonstrate the effectiveness of Bi-LSTM with GloVe embeddings in accurately identifying toxic comments. By achieving high classification accuracy, precision, recall, and F1 scores, the models highlight their potential to minimize the harmful impact of toxic comments in online contexts. The results of this study have significant implications for online platforms, social media companies, and community moderators seeking automated solutions for content moderation.

Key Words: Toxic comment classification, Bidirectional Long Short-Term Memory (Bi-LSTM), GloVe embeddings, Deep learning.
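To make the architecture concrete, the following is a minimal sketch of a Bi-LSTM classifier over pre-trained GloVe embeddings, assuming a Keras implementation. It is illustrative only, not the authors' released code: the GloVe file path, vocabulary size, sequence length, layer sizes, and the single binary "toxic" output are assumptions made for the example.

```python
# Illustrative sketch (assumed Keras setup), not the paper's exact configuration.
import numpy as np
from tensorflow.keras import layers, models, initializers

MAX_LEN = 200        # assumed maximum comment length in tokens
VOCAB_SIZE = 50000   # assumed vocabulary size
EMBED_DIM = 100      # dimensionality of the GloVe vectors used

def load_glove_matrix(glove_path, word_index):
    """Build an embedding matrix from a GloVe text file (e.g. glove.6B.100d.txt)."""
    vectors = {}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    matrix = np.zeros((VOCAB_SIZE, EMBED_DIM), dtype="float32")
    for word, i in word_index.items():
        if i < VOCAB_SIZE and word in vectors:
            matrix[i] = vectors[word]
    return matrix

def build_model(embedding_matrix):
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        # Embedding layer initialised with pre-trained GloVe weights; frozen here.
        layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                         embeddings_initializer=initializers.Constant(embedding_matrix),
                         trainable=False),
        # Bidirectional LSTM reads each comment left-to-right and right-to-left.
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        # Single sigmoid output for toxic / non-toxic; a multi-label variant
        # would use one sigmoid unit per toxicity category instead.
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

In this setup the GloVe embedding layer supplies the semantic word representations, while the bidirectional LSTM captures context from both directions of the comment before the pooled features are passed to the classifier head.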
