Abstract

Trust models have become invaluable in dynamic scenarios, such as Internet applications, since they provide a means of estimating the trustworthiness of potential interaction counterparts. Currently, the majority of trust models require ratings to be expressed absolutely, that is, as values from a predefined scale. However, the literature shows that expressing ratings absolutely can be challenging for users and susceptible to their bias. These issues can be tackled, however, if, instead of asking users to rate with absolute values, we ask them to express preferences between pairs of alternatives. Thus, in this paper we propose a trust model where pairwise comparisons are used as ratings and where trust is expressed as a strict partial order induced over agents. To maintain a sound ordering, the model uses a belief revision technique that prevents contradictions that may arise when adding new information. The technique uses mechanisms that reason quantitatively about the reliability of information, allowing the model to time-discount ratings as well as withstand deceit. We evaluate the model in a series of experiments and compare the results against established trust models. The results show that the model quickly adapts to changes, gracefully handles deceitful, noisy, and biased information, and generally achieves good accuracy.
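
To make the core idea concrete, the following is a minimal illustrative sketch (not the authors' algorithm, and without their quantitative reliability reasoning or time-discounting): trust is represented as a strict partial order over agents, built from pairwise-comparison ratings, and a new comparison is rejected when it would contradict the order accumulated so far. All names (PairwiseTrustOrder, add_comparison, more_trusted) are hypothetical.

```python
# Illustrative sketch: a strict partial order over agents induced from
# pairwise-comparison ratings. A rating "a > b" means agent a is preferred
# (more trusted) than agent b. Before accepting a new comparison we check
# that it does not contradict the transitive closure of the order already
# built; contradictory ratings are rejected here, standing in for the
# paper's belief-revision step.

from collections import defaultdict


class PairwiseTrustOrder:
    def __init__(self):
        # better[a] = set of agents that a is directly preferred over
        self.better = defaultdict(set)

    def _reachable(self, src, dst):
        # Depth-first search: is dst already (transitively) below src?
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.better[node])
        return False

    def add_comparison(self, preferred, other):
        """Record 'preferred > other'; reject it if it would create a cycle,
        i.e. contradict the strict partial order built so far."""
        if preferred == other or self._reachable(other, preferred):
            return False  # contradiction: 'other' is already above 'preferred'
        self.better[preferred].add(other)
        return True

    def more_trusted(self, a, b):
        """True if a is above b in the induced strict partial order."""
        return a != b and self._reachable(a, b)


if __name__ == "__main__":
    order = PairwiseTrustOrder()
    print(order.add_comparison("alice", "bob"))    # True
    print(order.add_comparison("bob", "carol"))    # True
    print(order.add_comparison("carol", "alice"))  # False: would form a cycle
    print(order.more_trusted("alice", "carol"))    # True, by transitivity
```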
