Abstract

The comment sections of online forums and social media platforms have become the new playing field for cyber harassment. In response, many organizations and companies have moved to eliminate toxic and abusive comments altogether. To protect genuine users from exposure to offensive language on online media and social platforms, organizations have started flagging such comments and blocking the users who post them. Most organizations rely on automated detection of comment toxicity using machine learning and artificial intelligence based systems. In the present study, we build multi-headed comment toxicity detection models. We construct three toxicity detection models using deep learning techniques and compare their accuracy and results. We also develop a menu-driven interface that links the machine learning models in a way that is straightforward for non-programmers, making it convenient to build interactive programming interfaces with high accuracy and usability.
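The multi-headed design mentioned above can be sketched as one independent sigmoid output per toxicity label, so a single comment may receive several labels at once. The label set, logits, and threshold below are illustrative assumptions, not the paper's actual implementation:

```python
import math

# Toxicity labels commonly used in multi-label comment classification
# (an assumed label set; the study's exact labels may differ).
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def sigmoid(x):
    """Standard logistic function mapping a logit to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_multi_head(logits):
    """Map one logit per head to an independent probability per label.

    Unlike a softmax over classes, each head is scored separately,
    which is what lets one comment trigger several toxicity labels.
    """
    return {label: sigmoid(z) for label, z in zip(LABELS, logits)}

def flag_comment(probs, threshold=0.5):
    """Return the labels whose probability meets the (assumed) threshold."""
    return [label for label, p in probs.items() if p >= threshold]

# Example logits as they might come from a trained model's output layer.
probs = predict_multi_head([2.0, -3.0, 1.2, -4.0, 0.8, -2.5])
print(flag_comment(probs))  # → ['toxic', 'obscene', 'insult']
```

In practice the logits would come from a deep learning model's shared encoder, with each head trained against a binary cross-entropy loss for its label.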
