Abstract

The subject of this research is the process of generating explanations for the decisions of an artificial intelligence system. Explanations help the user understand how a result was reached and thus use an intelligent information system more effectively when making practical decisions. The purpose of this paper is to develop a method for evaluating explanations that accounts for differences in the input data and the corresponding decisions of an artificial intelligence system. Solving this problem makes it possible to assess how relevant an explanation is to the internal decision-making mechanism of an intelligent information system, regardless of the user's level of knowledge about how such a decision is made and used. To achieve this goal, the following tasks are solved: structuring the evaluation of explanations by their level of detail, taking into account both their correspondence to the decision-making process in the intelligent system and the level of perception of the system's users; and developing a method for evaluating explanations based on their correspondence to that decision-making process.

Conclusions. The article structures the evaluation of explanations according to their level of detail. Four levels are identified: associative dependencies, precedents, causal dependencies, and interactive dependencies, each corresponding to a different degree of detail in an explanation. It is shown that explanations at the associative and causal levels can be assessed with numerical, probabilistic, or possibilistic indicators, whereas the precedent and interactive levels require a subjective assessment based on surveys of the system's users. The article develops a method for the possibilistic assessment of the relevance of explanations to the decision-making process in an intelligent system, taking into account the dependencies between the input data and the system's decision. The method comprises stages that assess the sensitivity, correctness, and complexity of an explanation by comparing the values and the number of input variables it uses. This makes it possible to evaluate an explanation comprehensively in terms of its robustness to insignificant changes in the input data, its relevance to the result obtained, and the cost of computing it. In practical terms, the method minimizes the number of input variables in an explanation while satisfying a sensitivity constraint, which enables a more efficient interpretation built on the subset of key input variables that significantly influence the decision obtained by the intelligent system.
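The abstract only outlines these stages, so the following Python sketch is a hypothetical illustration rather than the paper's method: all names (predict, restricted_predict, minimal_explanation) and thresholds (max_sensitivity, tol) are assumptions introduced here. It shows one plausible reading of the three stages: sensitivity as the robustness of the explanation's output to small input perturbations, correctness as the agreement between a prediction restricted to the explanation's variables and the system's full decision, complexity as the number of variables used, and a greedy selection that keeps the variable count small while the sensitivity constraint holds.

```python
import numpy as np

# Hypothetical sketch: an explanation is a subset of input variables.
# All function names and thresholds are illustrative assumptions,
# not the paper's actual notation.

def predict(x):
    # Stand-in for the intelligent system's decision function.
    return 2.0 * x[0] + 0.5 * x[1] - 0.1 * x[2]

def restricted_predict(x, features, baseline):
    # Prediction that uses only the explanation's variables;
    # all other variables are fixed at a baseline value.
    z = baseline.copy()
    z[features] = x[features]
    return predict(z)

def sensitivity(x, features, baseline, eps=0.05, trials=100, seed=0):
    # Robustness to insignificant input changes: mean absolute deviation
    # of the restricted prediction under small random perturbations.
    rng = np.random.default_rng(seed)
    base = restricted_predict(x, features, baseline)
    devs = [abs(restricted_predict(x + rng.uniform(-eps, eps, x.size),
                                   features, baseline) - base)
            for _ in range(trials)]
    return float(np.mean(devs))

def correctness(x, features, baseline):
    # Relevance to the obtained result: how closely the restricted
    # prediction matches the system's full decision (0 is a perfect match).
    return abs(predict(x) - restricted_predict(x, features, baseline))

def complexity(features):
    # Complexity as the number of input variables used in the explanation.
    return len(features)

def minimal_explanation(x, baseline, max_sensitivity=0.2, tol=0.5):
    # Greedily grow the variable subset: at each step add the variable that
    # most reduces the correctness error, refuse any addition that would
    # violate the sensitivity constraint, and stop once the error is within
    # tolerance, so the returned subset stays as small as possible.
    remaining, chosen = list(range(x.size)), []
    while remaining:
        best = min(remaining,
                   key=lambda f: correctness(x, chosen + [f], baseline))
        if sensitivity(x, chosen + [best], baseline) > max_sensitivity:
            break  # adding more variables would break the constraint
        chosen.append(best)
        remaining.remove(best)
        if correctness(x, chosen, baseline) <= tol:
            break
    return chosen

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
features = minimal_explanation(x, baseline)
print("key variables:", features,
      "complexity:", complexity(features),
      "sensitivity:", sensitivity(x, features, baseline),
      "correctness error:", correctness(x, features, baseline))
```

Under these assumptions, the driver selects the two variables with the largest coefficients and drops the third, illustrating how an interpretation can be built from a subset of key input variables once the sensitivity and correctness requirements are met.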
