Abstract

Toxic online content has become a significant societal issue as internet use has grown exponentially among people of all cultural and educational backgrounds. Automatically identifying harmful text is challenging because it requires distinguishing merely disrespectful language from hate speech. In this paper, we present a technique for automatically classifying text as hateful or non-hateful. The study discusses the difficulty of automatic hate speech detection and examines several ways of combining machine learning and natural language processing for the task. The experimental results of these approaches are then compared with respect to their suitability for this problem. After evaluating the candidate models and fine-tuning the one with the highest accuracy, we achieve a 94% accuracy rate on the test data.
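The abstract describes the workflow only at a high level: compare several classifiers, select the best-performing one, and evaluate it on held-out test data. The sketch below illustrates one way such a pipeline could look, assuming TF-IDF features and three common scikit-learn classifiers; the toy corpus, model choices, and parameters are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch of a compare-then-select text-classification workflow:
# vectorize the text, compare candidate classifiers by cross-validation,
# then evaluate the best one on held-out test data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: label 1 = hateful, 0 = non-hateful. A real experiment
# would load a labeled hate-speech dataset instead of this toy data.
texts = [
    "I hate you and your kind", "you people are worthless",
    "get out of our country", "you are disgusting and stupid",
    "nobody wants your kind here",
    "what a lovely day outside", "thanks for the helpful answer",
    "this recipe turned out great", "looking forward to the weekend",
    "congratulations on the new job",
]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42
)

# Candidate models, each wrapped in a TF-IDF + classifier pipeline.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": MultinomialNB(),
    "linear_svm": LinearSVC(),
}

best_name, best_score, best_model = None, -1.0, None
for name, clf in candidates.items():
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    score = cross_val_score(pipeline, X_train, y_train, cv=3).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
    if score > best_score:
        best_name, best_score, best_model = name, score, pipeline

# Refit the best-performing model on all training data and score it on test data.
best_model.fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, best_model.predict(X_test))
print(f"best model: {best_name}, test accuracy = {test_accuracy:.3f}")
```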
