Abstract

The subject of this paper is the evaluation of explanations in an artificial intelligence system. The aim is to develop a possibility-based method for evaluating the correctness of explanations intended for the end user of an artificial intelligence system. Evaluating the correctness of explanations increases the user's confidence in the system's decisions and thus creates the conditions for using those decisions effectively.

Aims: to structure explanations according to the user's needs; to develop an indicator of explanation correctness based on possibility theory; to develop a method for evaluating explanation correctness using the possibilistic approach. The approaches used are: a set-theoretic approach to describe the elements of explanations in an artificial intelligence system; a possibilistic approach to represent the criterion for evaluating explanations in an intelligent system; and a probabilistic approach to describe the probabilistic component of the evaluation of explanations.

The following results are obtained. The explanations are structured according to the needs of the user. It is shown that explanations of the decision process are used by specialists who develop intelligent systems; such an explanation represents a complete or partial sequence of steps by which the artificial intelligence system derives a decision. End users mostly rely on explanations of the result presented by an intelligent system; such explanations typically describe the relationship between the values of the input variables and the resulting prediction. The article discusses the requirements for evaluating explanations, considering the needs of internal and external users of an artificial intelligence system. It is shown that explanation fidelity evaluation is appropriate for specialists who develop such systems, while explanation correctness evaluation is appropriate for external users. An explanation correctness assessment is proposed that uses the necessity measure from possibility theory, and a method for evaluating explanation correctness is developed.

Conclusions. The scientific novelty of the obtained results is as follows. A possibility-based method for assessing the correctness of an explanation in an artificial intelligence system, using the possibility and necessity measures, is proposed. The method computes the necessity of using the target value of an input variable in the explanation while accounting for the possibility of choosing alternative values of that variable, which makes it possible to verify that the target value of the input variable is indeed necessary for the explanation and, consequently, that the explanation is correct.
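To make the possibilistic criterion concrete, the sketch below shows one way such a necessity score could be computed, assuming the standard possibility-theory relation N(A) = 1 − Π(not A). The function name, the dictionary representation of the possibility distribution, and the example values are illustrative assumptions, not the paper's notation.

```python
def necessity_of_target(possibility: dict, target) -> float:
    """Necessity that the target value is the one used in the explanation.

    `possibility` maps each candidate value of the input variable to its
    possibility degree in [0, 1]. Under standard possibility theory,
    N({target}) = 1 - max possibility over all alternative values.
    """
    alternatives = [p for value, p in possibility.items() if value != target]
    return 1.0 - max(alternatives, default=0.0)


# Illustrative example: the explanation uses the value "high" for a feature;
# the alternatives are only weakly possible, so the target value is close
# to being necessary and the explanation can be considered correct.
pi = {"high": 1.0, "medium": 0.2, "low": 0.1}
print(necessity_of_target(pi, "high"))  # 0.8
```

A necessity close to 1 would indicate that no alternative value of the input variable could plausibly replace the target value in the explanation; a necessity close to 0 would indicate that the explanation is not supported by the target value.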
