Abstract

Representing, manipulating, and inferring trust from the user's point of view is a grand challenge in virtual worlds, including online games. When a user meets an unknown individual, the question is "Can I trust him/her or not?" Answering it requires the user to have access to a representation of trust about others, as well as a set of operators to infer the trustability of other users/players. In this paper, we employ a trust representation generated from in-world data to feed individual trust decisions. For that purpose, we assume that such a representation of trust already exists; in fact, it was proposed in a previous paper of ours. The focus here is therefore on the trust mechanisms required to infer the trustability of other users/players. More specifically, we use an individual trust representation, deployed as a trust network, as the basis for an inference mechanism that applies two subjective logic operators (consensus and discount) to automatically derive trust decisions. The proposed trust inference system has been validated through OpenSimulator scenarios, yielding a 5% increase in the trustability of avatars relative to the reference scenario (without trust).
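The two subjective logic operators named above have standard definitions due to Jøsang, which can be sketched as follows. This is not the paper's implementation: the opinion triple (b, d, u) for belief, disbelief, and uncertainty with b + d + u = 1, and the names `Opinion`, `discount`, and `consensus`, are assumptions for illustration only.

```python
# Minimal sketch of subjective logic's discount and consensus operators,
# following Josang's standard definitions (an assumption; the paper's
# exact representation and operator variants may differ).
from typing import NamedTuple

class Opinion(NamedTuple):
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty (b + d + u == 1)

def discount(trust_ab: "Opinion", op_bx: "Opinion") -> "Opinion":
    """A's derived opinion about x via B (trust transitivity).

    A's belief in B scales B's belief/disbelief about x; everything
    else (A's disbelief and uncertainty in B) becomes uncertainty.
    """
    return Opinion(
        b=trust_ab.b * op_bx.b,
        d=trust_ab.b * op_bx.d,
        u=trust_ab.d + trust_ab.u + trust_ab.b * op_bx.u,
    )

def consensus(o1: "Opinion", o2: "Opinion") -> "Opinion":
    """Cumulative fusion of two independent opinions about the same target."""
    k = o1.u + o2.u - o1.u * o2.u
    if k == 0:  # both opinions dogmatic (u == 0); average as a simple fallback
        return Opinion((o1.b + o2.b) / 2, (o1.d + o2.d) / 2, 0.0)
    return Opinion(
        b=(o1.b * o2.u + o2.b * o1.u) / k,
        d=(o1.d * o2.u + o2.d * o1.u) / k,
        u=(o1.u * o2.u) / k,
    )
```

For example, if avatar A trusts B with opinion (0.9, 0.0, 0.1) and B holds opinion (0.8, 0.1, 0.1) about C, discounting gives A the derived opinion (0.72, 0.09, 0.19) about C, still summing to 1; consensus would then let A fuse such derived opinions arriving over several trust-network paths.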
