Abstract

The growing popularity of social networks and their easy acceptance of new users have the unintended consequence of fostering an environment where anonymous users can act in malicious ways. Although these platforms have strong incentives to prevent such behavior, they have not been able to cope with the sheer volume of information that must be processed. Moreover, attackers tend to change strategies rapidly in response to defensive measures, which poses a further challenge. Hence, research on user trustworthiness on social networks is gaining traction, with many interesting studies conducted in recent years. In this work, we review the current state of the field and analyze studies published between 2012 and 2020 that address this problem using various methodologies. Some of the solutions discussed in the literature can be described as bot identification protocols, while others focus on anti-spam protection, recognition of fake news, or rating the truthfulness of user-generated content. Many of these solutions offer tangible benefits in particular respects; however, none of them provides comprehensive protection against all possible types of attacks. Monitoring this scientific field is therefore a key task, and this review will hopefully lead to a better understanding of the concept of online user trustworthiness by highlighting recent works that deal with this issue.
