Abstract
Full-complexity machine learning models, such as deep neural networks, are non-traceable black boxes, whereas classic interpretable models, such as linear regression, are often over-simplified and therefore less accurate. Limited model interpretability restricts the application of machine learning to management problems, which require both high prediction performance and an understanding of each feature's contribution to the model outcome. To enhance interpretability while preserving good prediction performance, we propose a hybrid interpretable model that combines a piecewise linear component and a nonlinear component. The first component describes explicit feature contributions through piecewise linear approximation, increasing the expressiveness of the model. The second component uses a multi-layer perceptron to improve prediction performance by capturing high-order interactions between features and their complex nonlinear transformations. Once the model is learned, interpretability is obtained in the form of shape functions for the main effects. We also provide a variant that explores higher-order interactions among features. Experiments on synthetic and real-world data sets demonstrate that the proposed models achieve good interpretability by explicitly describing the main effects and interaction effects of the features while maintaining state-of-the-art accuracy.
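The following is a minimal sketch, not the authors' implementation, of the hybrid architecture the abstract describes: per-feature piecewise linear shape functions for the main effects plus an MLP for higher-order interactions. It assumes PyTorch; the class names (`PiecewiseLinear`, `HybridNet`), the fixed knot grid, and the hidden size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PiecewiseLinear(nn.Module):
    """Per-feature piecewise linear shape functions built from ReLU hinges (assumption: inputs scaled to [0, 1])."""
    def __init__(self, n_features, n_knots=8):
        super().__init__()
        # Fixed, evenly spaced knots for every feature; only the slopes are learned.
        self.register_buffer("knots", torch.linspace(0.0, 1.0, n_knots).repeat(n_features, 1))
        self.slopes = nn.Parameter(torch.zeros(n_features, n_knots))  # hinge slopes per feature
        self.linear = nn.Parameter(torch.zeros(n_features))           # base linear slope per feature

    def forward(self, x):                                    # x: (batch, n_features)
        hinges = torch.relu(x.unsqueeze(-1) - self.knots)    # (batch, n_features, n_knots)
        return x * self.linear + (hinges * self.slopes).sum(-1)  # one main-effect value per feature

class HybridNet(nn.Module):
    """Interpretable piecewise linear main effects plus an MLP capturing feature interactions."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.main = PiecewiseLinear(n_features)
        self.mlp = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Prediction = sum of per-feature shape functions + nonlinear interaction term + bias.
        return self.main(x).sum(-1, keepdim=True) + self.mlp(x) + self.bias
```

In this sketch, plotting `model.main(x)` for each feature over its input range would yield the shape functions that carry the model's interpretability, while the MLP term absorbs the residual interaction effects.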