Abstract
With respect to molecular, material, and process designs, it is important to construct nonlinear regression models with high predictive ability between the features, that is, x, and the properties and activities, that is, y. The interpretation of such constructed models can help elucidate the mechanism by which x affects y. In this study, a stable and effective method for the local interpretation of regression models is proposed by improving the local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) techniques, which were developed as methods for interpreting nonlinear regression models. By calculating the local contribution of x to y based on LIME and SHAP for the k-nearest neighbors of a target sample, a stable interpretation is possible even for models that overfit the training data. Furthermore, a method for calculating the local contribution of x to y based on simulations using a nonlinear regression model is proposed. Finally, using actual datasets consisting of the characteristics of various compounds, it is confirmed that the proposed method can locally and accurately interpret nonlinear models.

Tweetable abstract
A method for calculating the local contribution of x to y based on simulations using a nonlinear regression model is proposed, and it is confirmed to accurately interpret nonlinear models on real datasets.
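The core idea summarized above, explaining a nonlinear model locally by fitting a simple surrogate on the k-nearest neighbors of a target sample, can be sketched as follows. This is a minimal, illustrative example, not the paper's implementation: the toy model `f`, the synthetic data, and the choice of an ordinary-least-squares linear surrogate (in the spirit of LIME) are all assumptions made for the sketch.

```python
# Stdlib-only sketch: estimate the local contribution of each feature x to y
# by fitting a linear surrogate to a nonlinear model f on the k-nearest
# neighbors of a target sample. The slope of the surrogate is read as the
# local contribution, as in LIME-style explanations.
import math
import random

def f(x1, x2):
    """A toy nonlinear regression model standing in for a trained model."""
    return math.sin(x1) + 0.5 * x2 ** 2

def local_contributions(X, target, k=10):
    """Fit a linear surrogate to f on the k-nearest neighbors of `target`
    (ordinary least squares on centered data, 2x2 normal equations)."""
    # Pick the k training points closest to the target sample.
    neighbors = sorted(
        X, key=lambda p: (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    )[:k]
    ys = [f(x1, x2) for x1, x2 in neighbors]
    # Center features and response so no intercept term is needed.
    m1 = sum(p[0] for p in neighbors) / k
    m2 = sum(p[1] for p in neighbors) / k
    my = sum(ys) / k
    a = [(p[0] - m1, p[1] - m2) for p in neighbors]
    r = [y - my for y in ys]
    # Normal equations (A^T A) w = A^T r for the two-feature case.
    s11 = sum(u * u for u, _ in a)
    s12 = sum(u * v for u, v in a)
    s22 = sum(v * v for _, v in a)
    t1 = sum(u * ri for (u, _), ri in zip(a, r))
    t2 = sum(v * ri for (_, v), ri in zip(a, r))
    det = s11 * s22 - s12 * s12
    w1 = (s22 * t1 - s12 * t2) / det
    w2 = (s11 * t2 - s12 * t1) / det
    return w1, w2  # local slopes, i.e., local contributions of x1 and x2

random.seed(0)
X = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(200)]
w1, w2 = local_contributions(X, target=(0.5, 1.0), k=15)
# The analytic gradient of f at (0.5, 1.0) is (cos(0.5), 1.0), so the
# surrogate slopes should land near those values.
print(w1, w2)
```

Restricting the surrogate fit to nearby samples is what makes the explanation local; the paper's contribution, per the abstract, is stabilizing this kind of estimate (for both LIME and SHAP) so that it remains reliable even when the underlying model overfits.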