Abstract

We consider a distributed multi-user system in which individual entities possess observations or perceptions of one another, while the truth about each user is known only to that user, who may have an interest in withholding or distorting it. We ask whether it is possible for the system as a whole to arrive at correct perceptions or assessments of all users, referred to as their reputations, by incentivizing the users to participate in a collective effort without violating their privacy or self-interest. Two specific applications, online shopping and network reputation, are provided to motivate our study and interpret the results. We investigate this problem using a mechanism-design-theoretic approach. We introduce a number of utility models representing users' strategic behavior, each consisting of one or both of a truth element and an image element, reflecting a user's desire to obtain an accurate view of others and to project an inflated image of itself. For each model, we either design a mechanism that achieves optimal performance (the solution to the corresponding centralized problem) or present individually rational suboptimal solutions. In the latter case, we demonstrate that even when the centralized solution is not achievable, under a simple punish-reward mechanism not only does a user have the incentive to participate and provide information, but this information can also improve system performance.
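For illustration only (the abstract does not specify a functional form, and the symbols below are our assumptions rather than the paper's notation), a utility combining both elements might take a quadratic form such as

$$ u_i \;=\; -\,\alpha_i \sum_{j \neq i} \bigl(\hat{x}_{ij} - x_j\bigr)^2 \;+\; \beta_i \, R_i , $$

where $x_j$ is user $j$'s private truth, $\hat{x}_{ij}$ is user $i$'s assessment of user $j$, $R_i$ is the reputation the system assigns to user $i$, and $\alpha_i, \beta_i \ge 0$ weight the truth and image elements. The truth element rewards accurate assessments of others, while the image element rewards a high own reputation, capturing the tension that any incentive mechanism for this problem must resolve.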
