Abstract

The number of discussion rounds and the harmony degree of the decision makers are two crucial efficiency measures to consider when designing the consensus-reaching process for group decision-making problems. Adjusting the feedback parameter and the importance weights of the decision makers in the recommendation mechanism strongly affects both measures. This work proposes novel, efficient reinforcement-learning-based adjustment mechanisms that address the tradeoff between these measures. To employ these adjustment mechanisms, we extract the state-transition dynamics from consensus models based on distributed trust functions and $Z$-numbers, converting the decision environment into a Markov decision process. Two independent reinforcement learning agents are then trained via a deep deterministic policy gradient algorithm to adjust the feedback parameter and the importance weights of the decision makers. The first agent is trained to reduce the number of discussion rounds while preserving the highest possible harmony degree among the decision makers. The second agent speeds up the consensus-reaching process solely by adjusting the importance weights of the decision makers. Various experiments are designed to verify the applicability and scalability of the proposed feedback and weight-adjustment mechanisms in different decision environments.
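To make the Markov-decision-process framing concrete, the sketch below shows a toy consensus-reaching environment of the kind a deep deterministic policy gradient agent could be trained on. It is only an illustrative assumption, not the paper's model: the class name `ConsensusEnv`, the preference-update rule, the consensus threshold, and the reward shaping are all hypothetical, and the distributed trust functions and $Z$-number representations of the actual consensus models are omitted. The state is the vector of distances between individual and collective preferences, the continuous action is the feedback parameter, and the reward trades off the number of rounds against a terminal harmony term.

```python
import numpy as np

class ConsensusEnv:
    """Toy consensus-reaching environment cast as an MDP (illustrative only).

    State:  per-decision-maker distances to the collective preference.
    Action: the feedback parameter in [0, 1] controlling how strongly
            inconsistent decision makers move toward the collective opinion.
    Reward: -1 per discussion round, plus a terminal term that rewards
            reaching consensus with little change to the original opinions
            (a simple stand-in for the harmony degree).
    """

    def __init__(self, n_dms=5, n_alternatives=4, threshold=0.1,
                 max_rounds=20, seed=0):
        self.n_dms = n_dms
        self.n_alt = n_alternatives
        self.threshold = threshold        # consensus threshold on mean distance
        self.max_rounds = max_rounds
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Random individual preference vectors over the alternatives, in [0, 1].
        self.prefs = self.rng.random((self.n_dms, self.n_alt))
        self.initial_prefs = self.prefs.copy()
        self.weights = np.full(self.n_dms, 1.0 / self.n_dms)
        self.round = 0
        return self._state()

    def _collective(self):
        # Importance-weighted collective preference.
        return self.weights @ self.prefs

    def _state(self):
        return np.linalg.norm(self.prefs - self._collective(), axis=1)

    def step(self, feedback_param):
        lam = float(np.clip(feedback_param, 0.0, 1.0))
        collective = self._collective()
        distances = self._state()
        # Decision makers farther than average are nudged toward the
        # collective preference, with strength set by the feedback parameter.
        movers = distances > distances.mean()
        self.prefs[movers] = (1 - lam) * self.prefs[movers] + lam * collective
        self.round += 1

        state = self._state()
        consensus = bool(state.mean() < self.threshold)
        done = consensus or self.round >= self.max_rounds
        reward = -1.0
        if done:
            drift = np.linalg.norm(self.prefs - self.initial_prefs, axis=1).mean()
            reward += 10.0 * consensus - drift
        return state, reward, done


if __name__ == "__main__":
    env = ConsensusEnv()
    state, done, total = env.reset(), False, 0.0
    while not done:
        action = np.random.uniform(0.2, 0.8)  # placeholder for the trained DDPG actor
        state, reward, done = env.step(action)
        total += reward
    print(f"rounds: {env.round}, return: {total:.2f}")
```

In the paper's setting, the random action above would be replaced by the output of a DDPG actor network trained on such an environment, and a second agent with an analogous state and reward would adjust the importance-weight vector instead of the feedback parameter.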
