Abstract
Malicious attackers are prevalent in online communities because accounts can be created freely and easily on social platforms. This harms sub-communities that depend on trusted members, especially those intended for the exchange of valuable goods or services. Prior research on identity deception detection on social platforms has focused on identifying features that are effective in detecting these accounts, and such studies often rely on years of historical data to demonstrate the effectiveness of their detection models. However, the detection features used in past studies have not yet been classified and compared. Furthermore, for these features to become viable for wider deployment on large social platforms, the volume of data required for these models to operate well must be reduced. This paper proposes a proactive approach that detects malicious accounts at the point of attempted entry into a sub-community. We construct a taxonomy of features based on user data, compare their efficacy, and identify opportunities for reducing the data requirements of these detection models. The features are generalized so that they can be applied to other social media platforms.