Abstract

Automatic monitoring of user-generated content on social networking sites (SNSs) aims to detect potential harm for adolescents by means of text and image mining techniques and subsequent actions by the providers (e.g., blocking users, legal action). Current research is primarily focused on the technological development of such monitoring. However, involving adolescents' voices regarding the desirability of this monitoring is important, particularly because automatic monitoring might invade adolescents' privacy and freedom and consequently evoke reactance. In this study, fourteen focus groups were conducted with adolescents (N=66) between 12 and 18 years old. The goal was to obtain insights into adolescents' opinions on the desirability of, and priorities for, automatically detecting harmful content on SNSs. Opinions reflect the tension between a need for protection online and the preservation of freedom. Most adolescents in this study are in favour of automatic monitoring for situations they perceive as uncontrollable or that they cannot solve themselves. Clear priorities for detection must be set in order to ensure the privacy and autonomy of adolescents. Moreover, monitoring actions aimed at the prevention of harm are required.
