Abstract

Online gaming is no longer an activity of limited access; in recent years it has become available to a large proportion of children. As a result, children are exposed to multifaceted threats such as cyberbullying, grooming, and sexting. Although the online gaming industry is taking concerted measures to create a safe environment in which children can play and interact, these efforts remain inadequate and fragmented. There is a pressing need for laws and policies that regulate the industry and establish minimum standards to safeguard children online, while at the same time promoting innovation in the gaming industry to preempt such threats. Many tools have been adopted to mitigate threats against children, including content filtering and parental controls that restrict contact with children to shield them from predators. In addition, several approaches based on machine learning (ML) techniques have been designed to detect child predatory behavior and offer protection in this context. In this paper, we survey online threats to children in the gaming environment and discuss the limitations of existing solutions that address these threats. We also present the challenges that ML techniques face in protecting children from predatory behavior, through a systematic review of the techniques available in the literature. This analysis provides recommendations not only to stakeholders for developing policies and practices that safeguard children when gaming, but also to the gaming industry for continuing to provide appropriate measures for a safe and entertaining gaming environment.
