The propagation of immoral content on social media poses a substantial threat to online well-being and communication standards. While beneficial, traditional machine learning (ML) methods fall short of capturing the complexity of textual and sequential data. This work addresses this gap by proposing a deep learning-based technique for detecting immoral posts on social media. The proposed model fine-tunes Bidirectional Encoder Representations from Transformers (BERT) and combines it with word embedding methods: Word2Vec and Global Vectors for Word Representation (GloVe) are employed to improve the identification of immoral posts on social media platforms and to advance detection accuracy and robustness. The motivation for this study stems from the increasing demand for more sophisticated methods to combat harmful content. The proposed model is designed to capture the complex patterns and semantic nuances of immoral posts while reducing the dependence on manual feature engineering. The model is trained and evaluated on the SARC and HatEval benchmark datasets, which provide a rich set of labelled user-generated posts. The proposed model outperforms traditional ML approaches. The fine-tuned BERT model with Word2Vec embeddings achieved a precision of 95.68%, a recall of 96.85%, and an F1-score of 96.26% on the SARC dataset. The fine-tuned BERT model with GloVe embeddings achieved a superior precision of 96.65%, a recall of 97.75%, and an F1-score of 97.20% on the HatEval dataset. These results highlight the potential of the deep learning (DL) approach and fine-tuned BERT models to considerably improve the detection of unethical content on social networks.
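To make the described setup concrete, the sketch below shows one plausible way to fine-tune a BERT classifier for binary harmful-post detection with Hugging Face Transformers. It is a minimal illustration, not the authors' implementation: the toy examples, hyperparameters, and checkpoint name are assumptions, and the additional Word2Vec/GloVe embedding combination described in the abstract is omitted for brevity.

```python
# Minimal sketch (not the authors' code): fine-tuning BERT for binary
# immoral/benign post classification. Toy data and hyperparameters are
# illustrative assumptions standing in for SARC / HatEval-style inputs.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

# Hypothetical labelled posts: 1 = immoral/harmful, 0 = benign.
examples = {
    "text": ["great job, really...", "have a nice day"],
    "label": [1, 0],
}
ds = Dataset.from_dict(examples)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    # Convert raw text into BERT input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

ds = ds.map(tokenize, batched=True)

# Assumed hyperparameters; the paper's actual settings may differ.
args = TrainingArguments(output_dir="bert-immoral-posts",
                         per_device_train_batch_size=16,
                         num_train_epochs=3,
                         learning_rate=2e-5)

trainer = Trainer(model=model, args=args, train_dataset=ds)
trainer.train()
```

After training, the reported precision, recall, and F1-score would be computed by comparing the classifier's predictions on a held-out test split against the gold labels.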