Abstract

The growth of online communication platforms has raised concerns about the prevalence of harmful remarks, making it difficult to create safe and inclusive digital spaces. This study develops a robust framework that uses natural language processing (NLP) methods and machine learning algorithms to classify toxic comments. To improve the accuracy and comprehensiveness of classification, it investigates the integration of personality trait analysis alongside the identification of toxic language. The dataset, drawn from a wide range of online comments, underwent extensive preprocessing, including text cleaning, lemmatization, and feature extraction. Textual data was converted into numerical representations using TF-IDF vectorization and word embeddings to support the training and evaluation of machine learning models. Personality traits were then inferred from comments using sentiment analysis and linguistic cues, linking linguistic patterns with behavioural tendencies. On this basis, the study developed and evaluated classification models that combine features from textual content with inferred personality traits. The findings show encouraging associations between specific personality traits and the use of toxic language, offering opportunities to detect subtle differences in toxic comment contexts. This study outlines the methodology, major findings, and implications of incorporating personality trait analysis into toxic comment classification, providing insights toward more sophisticated and effective methods of reducing toxicity in online discourse.
