Abstract

Social media platforms have fundamentally transformed how people share information and communicate. While they offer significant benefits, they also pose challenges, such as the growing prevalence of cyberbullying. Whereas many studies have focused on the accuracy of text classification techniques for detecting cyberbullying, this research explores automating not only the detection but also the reporting of harmful posts. We developed a Support Vector Machine (SVM) model in WEKA to identify cyberbullying statements in English; the model achieved an accuracy of 57% with a kappa score of 0.2094. After developing the model, we extracted public posts from Twitter and applied text preprocessing, including cleaning and tokenization, before transforming the data into a Bag-of-Words (BoW) representation. When the model identifies a post as cyberbullying, a report is generated detailing the author's name, the post content, and the timestamp. This method enables timely detection of malicious content, giving social media platform administrators an efficient tool for prompt intervention.
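The pipeline described in the abstract (clean and tokenize posts, build a Bag-of-Words representation, classify with an SVM, and generate a report for flagged posts) can be sketched as follows. This is a minimal illustration only: it uses scikit-learn in place of WEKA, and the corpus, author name, and helper functions are invented placeholders, not the authors' actual data or implementation.

```python
# Illustrative sketch of the detect-and-report pipeline.
# scikit-learn stands in for WEKA; the toy corpus below is placeholder data.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def clean(text):
    """Lowercase, strip URLs and @mentions, keep only word tokens."""
    text = text.lower()
    text = re.sub(r"https?://\S+|@\w+", " ", text)
    return " ".join(re.findall(r"[a-z']+", text))

# Toy labelled corpus (1 = cyberbullying, 0 = benign) -- illustrative only.
posts = [
    "you are so stupid and worthless",
    "nobody likes you, just leave",
    "great game last night, congrats",
    "thanks for sharing this article",
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()                       # Bag-of-Words features
X = vectorizer.fit_transform(clean(p) for p in posts)
model = LinearSVC().fit(X, labels)                   # SVM classifier

def report_if_bullying(author, content, timestamp):
    """Return a report dict when the model flags the post, else None."""
    x = vectorizer.transform([clean(content)])
    if model.predict(x)[0] == 1:
        return {"author": author, "content": content, "timestamp": timestamp}
    return None

# Hypothetical flagged post: the generated report carries the author's
# name, the post content, and the timestamp, as the abstract describes.
print(report_if_bullying("user123",
                         "you are so stupid and worthless",
                         "2024-01-01T00:00:00Z"))
```

In a deployment, `report_if_bullying` would be fed posts pulled from the platform's API, and the returned report forwarded to platform administrators for review.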
