Abstract

Dealing with preferences is a ubiquitous part of our daily lives, and preferences thus form an integral part of our daily decision-making processes. When we consider preferences among a group of agents, deliberation often leads an agent to change its preferences under the influence of the opinions of others, depending on how reliable those agents are in the eyes of the agent under consideration. Sometimes change also occurs in the opposite direction: the reliability an agent assigns to others gets updated depending on their preferences. There are various formal studies of preference change based on reliability and/or trust, but not of the converse. This work contributes to the formal study of that latter aspect, namely reliability change based on agent preferences; in the process, some policies of preference change based on agent reliabilities are also discussed. We introduce a novel formal study of the relevant changes in agent reliabilities in decision-making scenarios, a topic of current interest in artificial intelligence. A two-dimensional hybrid language is proposed to describe such processes, and axiomatizations and decidability are discussed.
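
To make the two directions of influence concrete, the following is a minimal numerical toy sketch, not the paper's two-dimensional hybrid-logic framework: agents hold numeric scores over alternatives (standing in for preferences) and weights over one another (standing in for reliability). The function names, the reliability-weighted averaging rule, and the distance-based reliability rule are all illustrative assumptions introduced here, not constructs from the paper.

```python
from typing import Dict

Prefs = Dict[str, float]            # alternative -> score, for one agent
Reliability = Dict[str, float]      # other agent -> weight, for one agent


def update_preferences(prefs: Dict[str, Prefs],
                       rel: Dict[str, Reliability]) -> Dict[str, Prefs]:
    """Preference change driven by reliability (illustrative rule): each agent
    adopts the reliability-weighted average of all agents' scores, its own included."""
    new_prefs = {}
    for a, own in prefs.items():
        total = sum(rel[a][b] for b in prefs)
        new_prefs[a] = {
            alt: sum(rel[a][b] * prefs[b][alt] for b in prefs) / total
            for alt in own
        }
    return new_prefs


def update_reliability(prefs: Dict[str, Prefs],
                       rel: Dict[str, Reliability]) -> Dict[str, Reliability]:
    """Reliability change driven by preferences (illustrative rule): an agent
    rates another as more reliable the closer that agent's scores are to its own."""
    new_rel = {}
    for a in prefs:
        new_rel[a] = {}
        for b in prefs:
            gap = sum(abs(prefs[a][x] - prefs[b][x]) for x in prefs[a]) / len(prefs[a])
            new_rel[a][b] = 1.0 / (1.0 + gap)   # equals 1.0 when the scores coincide
    return new_rel


if __name__ == "__main__":
    # Hypothetical two-agent example with two alternatives.
    prefs = {"ann": {"x": 1.0, "y": 0.0}, "bob": {"x": 0.2, "y": 0.8}}
    rel = {"ann": {"ann": 1.0, "bob": 0.5}, "bob": {"ann": 0.5, "bob": 1.0}}
    rel = update_reliability(prefs, rel)     # reliability updated from preferences
    prefs = update_preferences(prefs, rel)   # preferences updated from reliability
    print(prefs)
    print(rel)
```

In this toy reading, one round of deliberation interleaves the two updates; the paper instead treats such processes qualitatively, in a two-dimensional hybrid language, rather than through numeric weights.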
