Abstract

Numerous sophisticated machine learning tools (e.g., ensembles or deep networks) have shown outstanding accuracy on different numeric forecasting tasks. In many real-world application domains, the numeric predictions of these models drive important and costly decisions. Frequently, decision makers require more than a black-box model before they can "trust" the predictions enough to base their decisions on them. In this context, understanding these black boxes has become one of the hot topics in Machine Learning and Data Mining research. This paper proposes a series of visualisation tools that help in understanding the predictive performance of non-interpretable regression models. More specifically, these tools allow the user to relate the expected error of any model to the values of the predictor variables. This type of information allows end-users to correctly assess the risks associated with using the models, by showing how concrete values of the predictors may affect their performance. Our illustrations with different real-world data sets and learning algorithms provide insights on the type of usage and information these tools bring to both the data analyst and the end-user.
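A minimal sketch of the general idea (not the paper's actual tooling): estimate a black-box model's out-of-sample errors via cross-validation and plot them against one predictor variable, so regions of the predictor's range with elevated expected error become visible. The dataset, model, and predictor column below are illustrative assumptions.

import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

X, y = fetch_california_housing(return_X_y=True, as_frame=True)

# Cross-validated predictions approximate the model's expected
# out-of-sample behaviour on each case.
model = RandomForestRegressor(n_estimators=100, random_state=0)
y_hat = cross_val_predict(model, X, y, cv=5)
abs_err = np.abs(y - y_hat)

# Relate the error to a single predictor: clusters of high error over
# parts of the predictor's range signal risky operating regions.
predictor = "MedInc"  # hypothetical choice; any column could be inspected
plt.scatter(X[predictor], abs_err, s=5, alpha=0.3)
plt.xlabel(predictor)
plt.ylabel("absolute prediction error")
plt.title("Expected error vs. predictor value (5-fold CV)")
plt.show()

An end-user reading such a plot could, for instance, decline to act on predictions for cases whose predictor values fall in a high-error region.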
