Abstract

Trust is an essential requirement for effective human-agent interaction as artificial agents become part of human society in a social context. To blend into our society and maximize their acceptability and reliability, artificial agents need to adapt to the complexity of their surroundings, as humans do. This adaptation requires knowing whom to trust by evaluating the trustworthiness of their human partners. Cognitive agents therefore need trust models that allow them to trust humans the way a human trusts other humans, taking into account all factors that influence the human-agent trust mechanism. Several antecedents, within the cognitive system itself and in the surroundings, dynamically influence this mechanism. Personality, as a trust antecedent, has been found to have a substantial impact in predicting a human interactor's trustworthiness, which critically assists trust decision making. The current research therefore aims to use characteristics of the respective human as an antecedent of the human-agent trust process. This is accomplished by incorporating into the trust model the agent's capability to perceive the personality traits of the human interactor. The work introduces a trustworthiness assessment model based on fuzzy inference (TAMFIS) that assesses a human's trustworthiness by exploring the personality traits that predict it. The artificial agent can thereby develop an attitude towards its human collaborators that supports effective interaction. The proposed architecture is tested using the Dempster-Shafer theory of belief and estimation. It is anticipated that the proposed trust model will effectively evaluate the trustworthiness of human collaborators and support a more reliable human-agent trust relationship.
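The abstract states that the architecture is tested using the Dempster-Shafer theory of belief. As background, a minimal sketch of Dempster's rule of combination over a two-element frame ("trustworthy" vs. "untrustworthy") is shown below; the mass assignments and set names are illustrative assumptions, not values from the paper:

```python
# Dempster's rule of combination for two basic mass assignments over
# the frame {trustworthy, untrustworthy}. All numbers are illustrative.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule, normalizing out the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # intersecting focal elements reinforce each other
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # disjoint focal elements contribute to conflict
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize the surviving mass by the non-conflicting proportion
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

T = frozenset({"trustworthy"})
U = frozenset({"untrustworthy"})
THETA = T | U  # the full frame, representing ignorance

# Two hypothetical evidence sources about a human collaborator
m1 = {T: 0.6, U: 0.1, THETA: 0.3}
m2 = {T: 0.5, U: 0.2, THETA: 0.3}
fused = combine(m1, m2)
```

Combining two sources that both lean towards "trustworthy" yields a fused belief in trustworthiness higher than either source alone, which is how agreement between evidence sources can be used to validate a trust estimate.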

Highlights

  • In human-agent collaborative societies, working together involves interdependence: team members must depend on each other to accomplish collaborative tasks

  • The current study aims to model a trustworthiness assessment mechanism that shapes human-agent collaborative interaction

  • The proposed fuzzy-based system model assesses the trustworthiness of a human interactor from their inferred personality traits using a Multi-Layer Mamdani Fuzzy Inference System (MFIS)


Summary

INTRODUCTION

In human-agent collaborative societies, working together involves interdependence: team members must depend on each other to accomplish collaborative tasks. Artificial agents are therefore required to establish trust relationships with their human partners, as humans do with one another. Trustworthiness assessment can be performed based on the other party's characteristics, social setup, and situation; for example, whether the trustee has fulfilled promises in the past and how prone the trustee is to breaking them [12], [13]. Based on these findings, the current study aims to model a trustworthiness assessment mechanism that shapes human-agent collaborative interaction.
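The highlights describe a Mamdani fuzzy inference system that maps inferred personality traits to a trustworthiness estimate. A minimal single-input sketch of the Mamdani pipeline (fuzzification, min-implication, max-aggregation, centroid defuzzification) is given below; the trait name, membership functions, and rule base are illustrative assumptions, not the paper's actual TAMFIS rules:

```python
# Minimal single-input Mamdani fuzzy inference sketch: maps a trait
# score in [0, 1] (e.g. an inferred "agreeableness" level, hypothetical)
# to a trustworthiness estimate in [0, 1]. Sets and rules are assumed.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets over the input trait score (shoulders extend past [0, 1])
trait_sets = {
    "low":    lambda x: tri(x, -0.5, 0.0, 0.5),
    "medium": lambda x: tri(x, 0.0, 0.5, 1.0),
    "high":   lambda x: tri(x, 0.5, 1.0, 1.5),
}

# Fuzzy sets over the output trustworthiness score
trust_sets = {
    "untrustworthy": lambda y: tri(y, -0.5, 0.0, 0.5),
    "neutral":       lambda y: tri(y, 0.0, 0.5, 1.0),
    "trustworthy":   lambda y: tri(y, 0.5, 1.0, 1.5),
}

# Rule base: IF trait IS <antecedent> THEN trust IS <consequent>
rules = [("low", "untrustworthy"), ("medium", "neutral"), ("high", "trustworthy")]

def infer(trait_score, steps=200):
    """Mamdani inference with min-implication, max-aggregation,
    and discretized centroid defuzzification."""
    num = den = 0.0
    for i in range(steps + 1):
        y = i / steps
        # Clip each rule's consequent by its firing strength, take the max
        agg = max(min(trait_sets[a](trait_score), trust_sets[c](y))
                  for a, c in rules)
        num += y * agg
        den += agg
    return num / den if den else 0.5
```

With these symmetric sets, a high trait score defuzzifies to a trustworthiness estimate well above the neutral 0.5, and a low score to one well below it, so the output rises monotonically with the input trait. A multi-layer system as described in the highlights would feed several such trait-level outputs into a further inference layer.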

LITERATURE REVIEW
CONCLUSION AND FUTURE WORK