Abstract

The article's subject matter is the process of constructing explanations for obtained results and for the sequence of decision-making actions in an intelligent information system. The goal is to develop an approach to constructing explanations based on a possibilistic description of cause-and-effect relationships between the input data and the result of the intelligent system, which creates the conditions for forming an explanation when the system is represented either as a "black box" or as a "gray box". Tasks: development of a generalized possibilistic-causal model of explanation; development of a method for the possibilistic-causal representation of an explanation in an intelligent information system. Methods used: methods for constructing explanations, possibility theory, and approaches to representing temporal knowledge. Conclusions. The scientific novelty of the obtained results is as follows. A possibilistic-causal model of explanation in an intelligent system is proposed, which specifies a cause-and-effect relationship between the class of input data and the decision class, as well as between intermediate actions of the result-obtaining process and the decision class. The possibilistic component of the model is calculated either for a subset containing representatives of the same data class or for individual actions of a simplified decision-making process. In practical terms, the developed model makes it possible to describe the decision-making process in terms of possibilistic causal dependencies and to build an explanation from limited data about the functioning of the intelligent information system. A method for constructing an explanation in an intelligent information system based on possibilistic causal dependencies is proposed. The method includes the stages of: determining the classes of input data and of the result to be explained; forming a list of candidate cause-and-effect dependencies that link the input data or process actions with the decision of the intelligent system; calculating the possibility of using the obtained dependencies to build an explanation; calculating the necessity of the obtained dependencies; and ordering the resulting explanations by necessity. The method makes it possible to build an explanation when the intelligent information system is represented either as a "black box" or as a "gray box", reflecting, respectively, the influence of the input data and the influence of a simplified decision-making process on the system's result.
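To make the staged method more concrete, the following is a minimal sketch in Python. It is not the authors' implementation: the `Dependency` class, the function names, and the loan-approval data are illustrative assumptions. Only the general scheme follows the abstract: candidate dependencies link input-data classes (or process actions) to a decision class, each is scored with the standard possibility measure Pi and its dual necessity measure N(A) = 1 - Pi(not A), and the resulting explanations are ordered by necessity.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    """A candidate cause-and-effect link for an explanation (hypothetical structure)."""
    antecedent: str    # input-data class or intermediate action
    consequent: str    # decision class of the intelligent system
    possibility: float # Pi: degree to which this link is possible
    necessity: float   # N:  degree to which this link is necessary

def possibility_from_counts(counts: dict) -> dict:
    """Estimate a possibility distribution over antecedents by max-normalising
    co-occurrence counts, a standard construction in possibility theory."""
    peak = max(counts.values())
    return {k: v / peak for k, v in counts.items()}

def necessity_of(pi: dict, event: str) -> float:
    """N(A) = 1 - Pi(not A): an antecedent is necessary to the degree that
    every alternative antecedent is impossible."""
    return 1.0 - max((p for k, p in pi.items() if k != event), default=0.0)

# Stages 1-2: input-data classes observed together with the decision class
# (hypothetical co-occurrence counts for a loan-approval decision).
counts = {"high_income": 9, "long_credit_history": 6, "manual_review_flag": 2}
pi = possibility_from_counts(counts)

# Stages 3-4: attach possibility and necessity to each candidate dependency.
candidates = [
    Dependency(a, "approve_loan", pi[a], necessity_of(pi, a)) for a in counts
]

# Stage 5: order explanations by necessity, most indispensable cause first.
for dep in sorted(candidates, key=lambda d: d.necessity, reverse=True):
    print(f"{dep.antecedent} -> {dep.consequent}: "
          f"Pi={dep.possibility:.2f}, N={dep.necessity:.2f}")
```

In this sketch only the most frequent antecedent receives a nonzero necessity, which matches the intent of the ordering stage: the explanation presented first is the dependency whose alternatives are least possible, i.e., the one hardest to replace in an account of the decision.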
