Abstract
The overall accuracy, macro precision, macro recall, F-score, and class balance accuracy have been among the most popular metrics for measuring classifier performance on multi-class problems, owing to their simplicity and easy interpretation. However, on imbalanced datasets, some of these metrics can be unfairly influenced by the majority classes. It is therefore recommended that they be used as a group rather than individually, a strategy that can unnecessarily complicate model selection and evaluation on imbalanced datasets. In this paper, we introduce a new metric, the imbalance accuracy metric (IAM), that can be used as a standalone measure for model evaluation and selection. The IAM is built on top of the existing metrics, is simple to use, and is easy to interpret. It is intended as a bottom-line measure that eliminates the need to compute a group of metrics and simplifies model selection.
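To illustrate the imbalance problem the abstract describes, the following sketch computes three of the standard metrics mentioned above on a hypothetical confusion matrix for an imbalanced three-class problem (the matrix values are invented for illustration, not taken from the paper). Overall accuracy looks strong because the majority class dominates, while macro recall exposes the poor minority-class performance:

```python
import numpy as np

# Hypothetical confusion matrix (rows = true class, columns = predicted class).
# Class 0 has 100 samples; classes 1 and 2 have only 10 each, and the
# classifier mostly predicts the majority class.
cm = np.array([
    [95, 3, 2],   # 100 samples of class 0
    [ 8, 1, 1],   #  10 samples of class 1
    [ 7, 1, 2],   #  10 samples of class 2
])

def overall_accuracy(cm):
    # Fraction of all samples on the diagonal (correctly classified).
    return np.trace(cm) / cm.sum()

def macro_recall(cm):
    # Per-class recall (correct / true class size), averaged with equal
    # weight per class, so minority classes count as much as the majority.
    return np.mean(np.diag(cm) / cm.sum(axis=1))

def macro_precision(cm):
    # Per-class precision (correct / predicted class size), equally weighted.
    return np.mean(np.diag(cm) / cm.sum(axis=0))

print(f"overall accuracy: {overall_accuracy(cm):.3f}")  # ~0.817, looks strong
print(f"macro recall:     {macro_recall(cm):.3f}")      # ~0.417, reveals weakness
print(f"macro precision:  {macro_precision(cm):.3f}")   # ~0.488
```

This divergence between the metrics is exactly why they are usually reported as a group on imbalanced data, which is the complication the proposed IAM aims to remove.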