Abstract

In this paper, we argue for continuous and automatic auditing of social media adaptive behavior and outline its key characteristics and challenges. We are motivated by the spread of online misinformation, which has recently been fueled by opaque recommendations on social media platforms. Although many platforms have declared that they are taking steps against the spread of misinformation, the effectiveness of such measures must be assessed independently. To this end, independent organizations and researchers carry out audits to quantitatively assess platform recommendation behavior and its effects (e.g., filter bubble creation tendencies). Such audits are typically based on agents that simulate user behavior and collect the platform's reactions (e.g., recommended items). Their main downside is the cost of interpreting the collected data, which is why some auditors are moving toward automatic annotation. Furthermore, social media platforms are dynamic and ever-changing: algorithms change, concepts drift, and new content appears. Audits therefore need to be performed continuously, which further increases the need for automated data annotation. For the annotation itself, we argue for the application of weak supervision, semi-supervised learning, and human-in-the-loop techniques.
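The agent-based auditing described above can be pictured as a simple act-and-observe loop. The sketch below is a minimal illustration under assumed names, not the authors' implementation: the `Platform` interface and its `watch` and `home_recommendations` methods are hypothetical stand-ins for whatever instrumentation (e.g., browser automation) a concrete audit would use.

```python
from dataclasses import dataclass, field
from typing import Protocol


class Platform(Protocol):
    """Hypothetical adapter around the platform under audit
    (in practice, e.g., a browser-automation wrapper)."""

    def watch(self, item_id: str) -> None: ...
    def home_recommendations(self) -> list[str]: ...


@dataclass
class AuditAgent:
    """Sock-puppet agent: consumes seed items, then records what gets recommended."""

    platform: Platform
    log: list[list[str]] = field(default_factory=list)

    def run(self, seed_items: list[str]) -> None:
        for item_id in seed_items:
            self.platform.watch(item_id)  # simulate user behavior
            # collect the platform's reaction after each interaction
            self.log.append(self.platform.home_recommendations())
```

Running many such agents continuously, with differing seed histories, is what makes the interpretation of the resulting logs the dominant cost.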
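For the automated annotation step, weak supervision typically combines several noisy, heuristic labeling functions into a single training label. The following sketch uses a plain majority vote for brevity; real pipelines usually fit a probabilistic label model (e.g., Snorkel's LabelModel) over the labeling-function outputs, and the two heuristics here are purely illustrative assumptions.

```python
from collections import Counter

ABSTAIN, CREDIBLE, MISINFO = -1, 0, 1

# Illustrative labeling functions: cheap, noisy heuristics over a post dict.
def lf_flagged_source(post: dict) -> int:
    return MISINFO if post.get("source") == "known-hoax-site.example" else ABSTAIN

def lf_reputable_source(post: dict) -> int:
    return CREDIBLE if post.get("source") == "press-agency.example" else ABSTAIN

def weak_label(post: dict, lfs) -> int:
    """Aggregate labeling-function votes by majority;
    abstentions fall through to human review."""
    votes = [v for lf in lfs if (v := lf(post)) != ABSTAIN]
    if not votes:
        return ABSTAIN  # no heuristic fired: route to a human annotator
    return Counter(votes).most_common(1)[0][0]

label = weak_label({"source": "known-hoax-site.example"},
                   [lf_flagged_source, lf_reputable_source])
assert label == MISINFO
```

Items on which every function abstains are exactly the candidates for human-in-the-loop annotation, which keeps annotators focused on the cases automation cannot resolve.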
