Abstract
Researchers are currently working on the development of "Explainable Artificial Intelligence" (XAI). Such systems are designed to help the user understand the decisions made by a neural network, which increases confidence in such systems and allows more effective decisions to be made based on the results of their operation. The first results of applying this approach have allowed developers and users to study which factors a neural network uses to solve a specific problem and which of its parameters need to be changed to improve the accuracy of its work. In addition, studying how neural networks extract, store, and transform knowledge may be useful for the future development of machine learning methods. To overcome the lack of interpretability of neural networks, it is proposed to consider methods for extracting rules from them, which can become a link between symbolic and connectionist models of knowledge representation in artificial intelligence. In this paper, we propose a neuro-fuzzy approach to rule extraction, using time series forecasting and text recognition as examples.

Keywords: Artificial Intelligence, XAI, Explainable Artificial Intelligence, Rule extraction, Neural networks