Interpretability requirements, complex and uncertain data, and limited training data characterize classification tasks in many real industrial applications. The interval belief rule base (IBRB) can handle various types of uncertainty and offers high interpretability. However, the IBRB contains a large number of parameters that are difficult for experts to set accurately by hand, which limits its scope of application. To address this issue, this paper proposes an interval rule inference network (IRIN), an interpretable classification model that generates the IBRB automatically by integrating the ideas of the IBRB and the neural network. First, hybrid data of different types are transformed into interval belief distributions so that they can be processed automatically. Second, the interval evidence reasoning method is used as the inference engine to propagate information, ensuring the interpretability of the process. Finally, a reasonable IBRB is generated automatically by updating its parameters with the neural network's learning engine. Moreover, the differentiability of the interval evidence reasoning method within the IRIN is proved as a theoretical foundation, and the interpretability of the IRIN's structure is analyzed. Experimental results demonstrate that the proposed method achieves high interpretability, enhancing the reliability of classification while maintaining accuracy. Its application to an actual engineering case shows that it is particularly well suited to engineering problems where explaining the results is a critical requirement.
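The first step above, turning raw inputs into a belief distribution over referential values, can be illustrated with the standard piecewise-linear (utility-based) transformation common in the belief-rule-base literature. This is a minimal sketch, not the paper's exact procedure: the function name and the choice of referential points are illustrative, and the paper's interval variant would additionally attach interval-valued (rather than crisp) belief degrees.

```python
def belief_distribution(x, refs):
    """Transform a crisp input x into a belief distribution over
    ascending referential values `refs`, using the piecewise-linear
    matching-degree transformation from the belief-rule-base
    literature (illustrative sketch only)."""
    beliefs = [0.0] * len(refs)
    # Inputs outside the referential range match the boundary value fully.
    if x <= refs[0]:
        beliefs[0] = 1.0
        return beliefs
    if x >= refs[-1]:
        beliefs[-1] = 1.0
        return beliefs
    # Otherwise, split belief between the two neighboring referential
    # values in proportion to proximity; degrees sum to 1.
    for i in range(len(refs) - 1):
        lo, hi = refs[i], refs[i + 1]
        if lo <= x <= hi:
            beliefs[i] = (hi - x) / (hi - lo)
            beliefs[i + 1] = 1.0 - beliefs[i]
            break
    return beliefs

# Example: x = 2.5 against referential values [0, 2, 4, 6]
# places 0.75 belief on 2 and 0.25 belief on 4.
print(belief_distribution(2.5, [0, 2, 4, 6]))
```

Because the transformation is piecewise linear in x, it is differentiable almost everywhere, which is what allows gradient-based parameter updates of the kind the IRIN's learning engine relies on.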