Abstract

We propose a computational model that enables robots to automatically make decisions under risk in a human-like way. Human decision-making (DM) under risk is influenced by psychological effects, including regret effects, probability weighting effects, and range effects. On the basis of regret theory, we devise a mathematical DM model that encompasses these psychological effects. To further quantify the model, we cast it into a state-space representation and design a fuzzy logic controller to elicit preference data from individual decision makers. The data from each individual were then used to train a personalized instance of the model. The resulting model is quantitative and sheds light on the psychological mechanism of risk attitudes in human DM. The prediction accuracy of the model was statistically tested. On average, the accuracy of our model is 74.7%, which is close to the average accuracy of the subjects when they repeated their own previously made decisions (73.3%). Furthermore, when only the decisions that the subjects repeated consistently are considered, the average accuracy of our model is 86.6%.

Note to Practitioners

Task allocation in human–robot collaboration (HRC) systems is challenging because robots and humans have heterogeneous advantages and drawbacks. Robots, despite recent developments, do not match humans in reliability; human working hours, however, are generally more expensive. When an HRC system has a high robot-to-human ratio and must operate for long periods, the number of decision-making (DM) problems of assigning a task to a robot or a human becomes enormous. Hence, the DM process needs to be automated. It has been found that team performance improves when all team members share the same mental models.
Therefore, in human-centered automation, the characteristics of human DM should be shared with robots. For this purpose, in this article we propose a human-like DM model. We first discuss the psychological effects that influence human DM under risk and translate these effects into the building components of the mathematical human-like DM model. Because the details of these components are initially undetermined, we design algorithms to measure them quantitatively. We test the prediction accuracy of the automated DM model and show that it is significantly more human-like than the traditional DM method. The proposed algorithms can be used to design human-like DM models in applications that involve allocating tasks between humans and robots, such as collaborative search and collaborative assembly.
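The abstract does not give the model's exact equations, but the regret-theoretic ingredients it names (regret effects and probability weighting) can be illustrated with a minimal Python sketch. All functional forms and parameter values below are assumptions for illustration: a Tversky–Kahneman-style probability weighting function, a concave power utility, and a skew-symmetric convex regret function, applied to a choice between a risky and a safe task assignment.

```python
import math

def weight(p, gamma=0.61):
    # Assumed Tversky-Kahneman weighting: overweights small probabilities,
    # underweights large ones (gamma=0.61 is a commonly cited estimate).
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def utility(x, alpha=0.88):
    # Assumed concave power utility over outcomes (risk-averse for gains).
    return x ** alpha if x >= 0 else -((-x) ** alpha)

def regret(d, k=0.1):
    # Assumed skew-symmetric, convex regret/rejoice function of the utility
    # difference between the chosen and the foregone option.
    return math.copysign(math.expm1(k * abs(d)) / k, d)

def net_advantage(lottery_a, lottery_b):
    # Regret-theoretic advantage of option A over option B; a positive value
    # means A is preferred. Lotteries are lists of (probability, outcome)
    # pairs defined over the same states of the world.
    return sum(
        weight(p) * regret(utility(a) - utility(b))
        for (p, a), (_, b) in zip(lottery_a, lottery_b)
    )

# Hypothetical choice: risky assignment A vs. safe assignment B.
risky = [(0.8, 100.0), (0.2, 0.0)]
safe = [(0.8, 60.0), (0.2, 60.0)]
print(net_advantage(risky, safe))  # negative: this agent prefers the safe option
```

With these (assumed) parameters, the convex regret function amplifies the anticipated regret of ending up with nothing, so the sketch's agent prefers the safe assignment even though the risky one has a higher expected payoff; the paper's actual model is fitted to individual preference data rather than fixed parameters like these.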
