Abstract

Algorithmic selection is omnipresent across our online everyday lives: it ranks our search results, curates our social media news feeds, and recommends videos to watch and music to listen to. This widespread application of algorithmic selection on the internet can be associated with risks such as feeling surveilled (S), feeling exposed to distorted information (D), or feeling that one is using the internet too excessively (O). One way in which internet users can cope with such algorithmic risks is by applying self-help strategies such as adjusting their privacy settings (Sstrat), double-checking information (Dstrat), or deliberately ignoring automated recommendations (Ostrat). This article determines the association of three theoretically derived factors — (1) risk awareness, (2) personal risk affectedness, and (3) algorithm skills — with these self-help strategies. The findings from structural equation modelling on survey data representative of the Swiss online population (N_2018 = 1,202) show that personal affectedness by algorithmic risks, awareness of algorithmic risks, and algorithm skills are associated with the use of self-help strategies. These results indicate that besides implementing statutory regulation, policymakers have the option to encourage internet users’ self-help by increasing their awareness of algorithmic risks, clarifying how such risks affect them personally, and promoting their algorithm skills.
