Abstract
Ratings provided by advisors can help an advisee to make decisions, e.g., which seller to select in e-commerce. Unfair rating attacks—where dishonest ratings are provided to mislead the advisee—impact the accuracy of decision making. Current literature focuses on specific classes of unfair rating attacks, which does not provide a complete picture of the attacks. We provide the first formal study that addresses all attack behavior that is possible within a given system. We propose a probabilistic modeling of rating behavior, and apply information theory to quantitatively measure the impact of attacks. In particular, we can identify the attack with the worst impact. In the simple case, honest advisors report the truth straightforwardly, and attackers rate strategically. In real systems, the truth (or an advisor’s view on it) may be subjective, making even honest ratings inaccurate. Although there exist methods to deal with subjective ratings, whether subjectivity influences the effect of unfair rating attacks was an open question. We discover that subjectivity decreases the robustness against attacks.
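The abstract's core idea, quantifying an attack's impact by the information the ratings leak about the facts, can be illustrated with a minimal sketch. The model below is an assumption for illustration, not the paper's exact formulation: the truth is a hidden binary state, an advisor is a probabilistic "channel" from state to rating, and mutual information measures leakage. A worst-case attack is one that drives the leakage to zero.

```python
import math

def mutual_information(p_x, channel):
    """I(X;Y) in bits, for a prior p_x over hidden states and a rating
    channel where channel[x][y] = P(rating y | state x)."""
    n_y = len(channel[0])
    p_y = [sum(p_x[x] * channel[x][y] for x in range(len(p_x)))
           for y in range(n_y)]
    info = 0.0
    for x in range(len(p_x)):
        for y in range(n_y):
            joint = p_x[x] * channel[x][y]
            if joint > 0 and p_y[y] > 0:
                info += joint * math.log2(joint / (p_x[x] * p_y[y]))
    return info

prior = [0.5, 0.5]                          # uniform prior on the truth
honest = [[0.95, 0.05], [0.05, 0.95]]       # reports the truth with prob. 0.95
flat = [[1.0, 0.0], [1.0, 0.0]]             # one attack: always rate "good"

print(mutual_information(prior, honest))    # ≈ 0.714 bits leaked
print(mutual_information(prior, flat))      # 0.0 bits: nothing leaked
```

The "always rate good" strategy is only one zero-leakage attack; the point of the quantitative measure is that it lets one search over all rating strategies expressible in the system and identify the one minimising leakage.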
Highlights
Users can help each other make decisions by sharing their opinions, especially when direct experience or evidence is insufficient
We consider two approaches proposed by system designers to deal with subjectivity: feature-based rating, which is widely used in practice to resolve conflicting emphases on features in an overall rating, and clustering advisors, which has been proposed in the literature to distinguish advisors with different subjectivity. Both approaches aim to mitigate the influence of subjectivity, so it is natural to ask whether they also improve robustness against unfair rating attacks
We propose a quantitative measurement of unfair rating attacks based on information theory
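The feature-based rating mentioned above can be sketched with a toy example. The weighted-sum aggregation and the specific features and weights below are assumptions for illustration: two honest advisors observe the same per-feature scores but weight the features differently, so their overall ratings diverge even though neither is dishonest.

```python
def overall(feature_scores, weights):
    """Weighted aggregate of per-feature scores (weights sum to 1)."""
    return sum(s * w for s, w in zip(feature_scores, weights))

# Same observed seller, scored on (quality, price, shipping).
scores = [0.9, 0.3, 0.8]

quality_focused = [0.6, 0.2, 0.2]   # advisee's emphasis
price_focused = [0.1, 0.8, 0.1]     # an honest advisor's emphasis

print(overall(scores, quality_focused))  # 0.76
print(overall(scores, price_focused))    # 0.41
```

Reporting the per-feature scores instead of only the overall rating removes this source of disagreement, which is why feature-based rating is considered a way to resolve conflicting emphasis on features.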
Summary
Users can help each other make decisions by sharing their opinions, especially when direct experience or evidence is insufficient. Malicious advisors (attackers) may deliberately provide fake or unreliable ratings to influence the decisions of other users (advisees); this is known as an unfair rating attack. Many approaches have been proposed in the literature to improve the robustness of trust systems against such attacks. From the advisee's perspective, the worst-case attack is the one that minimises the information leakage about the facts. Honest advisors can also be subjective in their ratings, or have preferences that differ from the advisee's; feature-based rating is one way to address this, and clustering advisors based on their behaviour is another way to discern subjectivity differences. We propose a probabilistic rating model and an information-leakage based quantification method as the basis for studying unfair rating attacks throughout the paper, and we study whether the existing methods of dealing with subjectivity influence robustness against attacks.
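The clustering of advisors described above can be sketched as follows. The greedy single-pass scheme, the distance threshold, and the advisor names are all assumptions for illustration; the literature typically uses more principled clustering, but the idea is the same: advisors whose rating profiles on shared items are close are grouped together, separating different subjectivities.

```python
def cluster_advisors(profiles, threshold=0.3):
    """Greedy single-pass clustering: an advisor joins the first cluster
    whose representative profile is within `threshold` mean absolute
    difference, otherwise starts a new cluster."""
    clusters = []  # each cluster: list of (name, profile) pairs
    for name, profile in profiles.items():
        for cluster in clusters:
            rep = cluster[0][1]
            dist = sum(abs(a - b) for a, b in zip(profile, rep)) / len(profile)
            if dist <= threshold:
                cluster.append((name, profile))
                break
        else:
            clusters.append([(name, profile)])
    return [[name for name, _ in c] for c in clusters]

# Ratings by four advisors on the same three sellers.
profiles = {
    "alice": [0.9, 0.8, 0.2],
    "bob":   [0.85, 0.75, 0.3],   # similar taste to alice
    "carol": [0.2, 0.3, 0.9],     # different subjectivity
    "dave":  [0.25, 0.2, 0.95],
}
print(cluster_advisors(profiles))  # [['alice', 'bob'], ['carol', 'dave']]
```

An advisee would then interpret ratings relative to the cluster they come from, rather than treating all advisors as sharing one view of the truth.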
Published in: IEEE Transactions on Information Forensics and Security