Abstract

In Machine Learning, a supervised model's performance is measured using evaluation metrics. In this study, we first present our motivation by revisiting the major limitations of these metrics, namely one-dimensionality, lack of context, lack of intuitiveness, incomparability, binary restriction, and lack of customizability. In response, we propose Contingency Space, a bounded semimetric space that provides a generic representation for any performance evaluation metric. We then show how this space addresses each of these limitations. In this space, each metric forms a surface, which allows us to compare different evaluation metrics visually. Taking advantage of the fact that a metric's surface warps in proportion to the degree to which the metric is sensitive to the class-imbalance ratio of the data, we introduce Imbalance Sensitivity, a measure that quantifies this skew sensitivity. Since an arbitrary model is represented in this space by a single point, we introduce Learning Path for qualitative and quantitative analyses of the training process. Using the semimetric with which contingency space is endowed, we introduce Tau, a new cost-sensitive and imbalance-agnostic metric. Lastly, we show that contingency space extends to multi-class problems as well. Throughout this work, we give a stipulative definition for each concept and illustrate every application with practical examples and visualizations.
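The abstract's idea of a metric forming a surface over a contingency space, and of that surface warping with class imbalance, can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes a binary contingency space parameterized by true-positive rate and true-negative rate, and it evaluates a metric on a grid of such points for a given positive-class ratio. Comparing the F1 surface at two imbalance ratios against the (ratio-invariant) balanced-accuracy surface mimics the intuition behind Imbalance Sensitivity; the function and parameter names are hypothetical.

```python
import numpy as np

def metric_surface(metric, pos_ratio, n=101):
    """Evaluate a metric over an (TPR, TNR) grid — one hypothetical
    parameterization of a binary contingency space."""
    tpr, tnr = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    P, N = pos_ratio, 1.0 - pos_ratio           # normalized class sizes
    tp, fn = tpr * P, (1 - tpr) * P             # confusion-matrix cells
    tn, fp = tnr * N, (1 - tnr) * N
    return metric(tp, fp, fn, tn)

# F1 depends on the class ratio; balanced accuracy does not.
f1   = lambda tp, fp, fn, tn: 2 * tp / (2 * tp + fp + fn)
bacc = lambda tp, fp, fn, tn: 0.5 * (tp / (tp + fn) + tn / (tn + fp))

# A crude skew-sensitivity proxy: how much the surface moves when the
# positive-class ratio drops from 0.5 to 0.1.
warp_f1   = np.abs(metric_surface(f1, 0.5) - metric_surface(f1, 0.1)).max()
warp_bacc = np.abs(metric_surface(bacc, 0.5) - metric_surface(bacc, 0.1)).max()
```

Here `warp_bacc` is zero (the balanced-accuracy surface is identical at both ratios), while `warp_f1` is strictly positive, which is the visual "warp" the abstract attributes to imbalance-sensitive metrics.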
