Abstract

With the development of artificial intelligence, the reliability and interpretability of the resulting architectures and algorithms have attracted widespread attention. Neural networks, one of the main methods of machine learning, lack interpretability because of their "black-box" mechanism. To alleviate this, we propose a convolutional rule inference network (CRIN) built on the belief rule-based inference methodology using the evidential reasoning approach (RIMER); CRIN is interpretable because it operates on rules in a belief rule base (BRB) that can represent various types of uncertain knowledge. Considering the influence of the global data distribution on attribute weights, the weight of each attribute may be affected by all other attributes; we therefore determine optimized attribute weights with the sigmoid activation function so that every attribute weight is formed from all attribute values. The derivation functions with optimized attribute weights are also given, based on evidential reasoning theory, to ensure the feasibility of the subsequent learning algorithm. To express the influence of local rule inputs on the final network output, we propose the network framework and learning algorithm of CRIN based on a convolution strategy. The inference process of RIMER serves as the feedforward mechanism, ensuring the interpretability of the network, while gradient descent serves as the back-propagation algorithm that adjusts the parameters of the rule base to establish a more reasonable BRB. Experimental results and comparative analysis demonstrate that the proposed CRIN has advantages in interpretability and learning capability. This study suggests new directions for interpretable networks and for uncertain classification in the reasoning approach.
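As a rough illustration of the two mechanisms named in the abstract, the sketch below shows (a) attribute weights produced by a sigmoid over all attribute values, so each weight reflects the global input, and (b) the standard BRB rule-activation form that such weights feed into. All function names, the mixing matrix W, and the bias b are hypothetical illustrations, not the paper's actual parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attribute_weights(x, W, b):
    """Form each attribute weight from *all* attribute values via a sigmoid
    (hypothetical parameterization; W and b would be tuned by back propagation)."""
    # x: (M,) attribute values; W: (M, M) mixing matrix; b: (M,) bias
    delta = sigmoid(W @ x + b)       # each delta_i depends on the whole input x
    return delta / np.max(delta)     # relative weights, largest weight equals 1

def rule_activation(alpha, delta, theta):
    """Weighted-matching activation of belief rules in the usual BRB form:
    w_k ∝ theta_k * prod_i alpha_{i,k}^{delta_i}, normalized over all K rules."""
    # alpha: (K, M) matching degrees; delta: (M,) attribute weights; theta: (K,) rule weights
    weighted = np.prod(alpha ** delta, axis=1) * theta
    return weighted / np.sum(weighted)

# Toy usage with random numbers, for illustration only.
rng = np.random.default_rng(0)
x = rng.random(4)                            # four antecedent attributes
W, b = rng.normal(size=(4, 4)), np.zeros(4)
delta = attribute_weights(x, W, b)
alpha = rng.random((3, 4))                   # matching degrees for three rules
theta = np.ones(3)                           # equal initial rule weights
print(rule_activation(alpha, delta, theta))  # normalized activation weights
```

In the full network, such activation weights would enter the evidential-reasoning combination step of RIMER, and gradient descent would update the rule-base parameters (e.g., theta and the belief degrees); those details are given in the paper itself.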
