Abstract

Artificial Intelligence relies on machine learning models which, while achieving high predictive accuracy, often lack explainability and robustness. This is a problem in regulated industries, where authorities tasked with monitoring the risks arising from the application of Artificial Intelligence methods may decline to validate such models. No measurement methodology is yet available to jointly assess the accuracy, explainability and robustness of machine learning models. We propose a methodology that fills this gap by extending the Forward Search approach, employed in robust statistical learning, to machine learning models. This allows us to evaluate, by means of interpretable statistical tests and within a unified framework, whether a specific Artificial Intelligence application is accurate, explainable and robust. We apply our proposal to the context of Bitcoin price prediction, comparing a linear regression model against a nonlinear neural network model.
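To make the core idea concrete, the following is a minimal Python sketch of a Forward Search run over a generic regressor, not the authors' exact methodology. The function name forward_search, the parameter m0, and the synthetic data are illustrative assumptions; the initial subset here comes from a plain fit on all data rather than the robust (e.g. least-median-of-squares) start used in the statistical literature, and the monitored statistic is simplified to the minimum squared residual among excluded observations rather than a formal deletion-residual test.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def forward_search(model, X, y, m0=None):
    """Basic Forward Search with a generic scikit-learn regressor.

    At each step the model is refit on the m observations with the
    smallest squared residuals, the subset grows by one, and the
    minimum squared residual among excluded observations is recorded.
    NOTE: a simplified sketch; the paper's tests are more refined.
    """
    n = len(y)
    if m0 is None:
        m0 = max(X.shape[1] + 1, n // 10)  # assumed initial subset size
    # Simplified initialization: observations closest to a fit on all data
    model.fit(X, y)
    resid2 = (y - model.predict(X)) ** 2
    subset = np.argsort(resid2)[:m0]
    min_out_resid = []  # statistic monitored along the search
    for m in range(m0, n):
        model.fit(X[subset], y[subset])          # fit on current subset
        resid2 = (y - model.predict(X)) ** 2     # residuals for all points
        order = np.argsort(resid2)
        subset = order[: m + 1]                  # grow the subset by one
        outside = order[m + 1:]                  # still-excluded points
        if outside.size:
            min_out_resid.append(resid2[outside].min())
    return np.array(min_out_resid)

# Hypothetical usage: synthetic data with a few injected outliers,
# comparing a linear model against a small neural network
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
y[:5] += 10.0  # contaminate five observations

trace_linear = forward_search(LinearRegression(), X, y)
trace_nn = forward_search(
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    X, y)
```

Under these assumptions, a sharp upward jump in the monitored trace near the end of the search signals the entry of the contaminated observations into the fitting subset, which is the behavior the Forward Search exploits; comparing the traces of the two models gives a rough, model-agnostic view of their relative robustness.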
