Abstract
In hybrid situations where artificial agents and human agents interact, the artificial agents must be able to reason about the trustworthiness and possible deceptive actions of their human counterparts. Thus a theory of trust and deception is needed that will support interactions between agents in virtual societies. There are several theories on trust (fewer on deception!), but none that deals specifically with virtual communities. Building on these earlier theories, the role of trust and deception in virtual communities is analyzed, with examples to illustrate the objectives a theory of trust should fulfill.