Abstract

Several sophisticated machine learning tools (e.g., ensembles or deep networks) have shown outstanding performance in different regression forecasting tasks. In many real-world application domains, the numeric predictions of these models drive important and costly decisions. Nevertheless, decision makers frequently require more than a black-box model before they can "trust" the predictions enough to base their decisions on them. In this context, understanding these black boxes has become one of the hot topics in machine learning research. This paper proposes a series of visualization tools that explain the relationship between the expected predictive performance of black-box regression models and the values of the input variables of any given test case. This type of information allows end-users to correctly assess the risks associated with the use of a model, by showing how concrete values of the predictors may affect its performance. Our illustrations with different real-world data sets and learning algorithms provide insights into the type of usage and information these tools bring to both the data analyst and the end-user. Furthermore, a thorough evaluation of the proposed tools is performed to showcase the reliability of the approach.
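The core idea described above, relating a model's expected error to the values of its predictors, can be illustrated with a minimal sketch. This is not the authors' exact method; it merely shows, under assumed synthetic data and an assumed k-nearest-neighbour stand-in for the black-box regressor, how per-case errors can be summarized as a function of one input variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task where the noise level grows with x,
# so the model's expected error genuinely depends on the predictor.
x = rng.uniform(0, 10, 2000)
y = np.sin(x) + rng.normal(0, 0.05 + 0.05 * x, size=x.size)

def knn_predict(x_train, y_train, x_query, k=15):
    """Hypothetical stand-in 'black box': a simple k-NN regressor.
    Any opaque regression model could take its place."""
    preds = np.empty_like(x_query)
    for i, q in enumerate(x_query):
        idx = np.argsort(np.abs(x_train - q))[:k]
        preds[i] = y_train[idx].mean()
    return preds

# Train/test split
n_train = 1500
x_tr, y_tr = x[:n_train], y[:n_train]
x_te, y_te = x[n_train:], y[n_train:]

# Per-test-case absolute errors of the black box.
errors = np.abs(knn_predict(x_tr, y_tr, x_te) - y_te)

# Expected error as a function of the predictor: bin x, average the error.
bins = np.linspace(0, 10, 6)
bin_idx = np.digitize(x_te, bins) - 1
for b in range(len(bins) - 1):
    mask = bin_idx == b
    if mask.any():
        print(f"x in [{bins[b]:.0f}, {bins[b+1]:.0f}): "
              f"mean |error| = {errors[mask].mean():.3f}")
```

Plotting these binned error estimates against the predictor (rather than printing them) gives the kind of visualization the abstract refers to: an end-user can see at a glance that predictions for large values of `x` carry more risk than those for small values.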

