Abstract

In supervised learning, the class imbalance problem often produces results biased towards the majority classes. Existing methods for handling class imbalance ignore a principal aspect of the problem: separating the overlapping classes. This is why most of these methods are prone to overfitting on the training data. To this end, we propose a novel loss function, the Margin-Aware Adaptive-Weighted (MAAW) loss. We first use the large-margin softmax to encourage intra-class compactness and inter-class separability. Then, to learn an unbiased representation of the classes, we introduce a dynamically weighted loss for imbalanced data classification, whose weights adapt on every minibatch based on the inverse class frequencies. In addition, the loss handles hard-to-train samples by using confidence scores to learn discriminative hidden representations of the data. The overall framework proves effective on two widely used datasets, CIFAR-10 and Fashion-MNIST. Additional experiments on the HAM10000 and APTOS-2019 BD datasets demonstrate the robustness of our methodology. The source code of the proposed methodology is available at the GitHub repository https://github.com/rishavpramanik/MAAW
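For a concrete picture of the loss described above, the following is a minimal PyTorch sketch of its three ingredients: a margin on the target logit, per-minibatch inverse-class-frequency weights, and confidence-based up-weighting of hard samples. It is an illustration under stated assumptions, not the authors' released implementation (see the GitHub repository for that): the function name `maaw_style_loss`, the additive margin standing in for the large-margin softmax, and the specific confidence weighting `2 - p_true` are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def maaw_style_loss(logits, targets, margin=0.35, num_classes=10):
    """Illustrative sketch combining the three ingredients the abstract
    describes; NOT the authors' exact MAAW formulation."""
    batch = torch.arange(len(targets), device=logits.device)

    # (1) Margin: subtract an additive margin from the target-class
    # logit, forcing the network to separate classes more strongly.
    # (An additive margin stands in here for the large-margin softmax.)
    margin_logits = logits.clone()
    margin_logits[batch, targets] -= margin

    # (2) Per-minibatch class weights from inverse class frequencies,
    # so minority-class samples in this batch contribute more.
    counts = torch.bincount(targets, minlength=num_classes).float()
    class_w = counts.sum() / counts.clamp(min=1.0)
    class_w = class_w * num_classes / class_w.sum()  # mean weight ~= 1

    # (3) Confidence-aware weighting: up-weight hard samples, i.e.
    # those with a low softmax confidence on their true class.
    with torch.no_grad():
        conf = F.softmax(logits, dim=1)[batch, targets]

    per_sample = F.cross_entropy(margin_logits, targets,
                                 weight=class_w, reduction='none')
    # Scale factor 2 - conf equals 1 + (1 - conf): easy samples keep
    # roughly unit weight, hard samples count up to twice as much.
    return (per_sample * (2.0 - conf)).mean()
```

In a training loop, such a function would simply replace the standard cross-entropy call, e.g. `loss = maaw_style_loss(model(x), y)` followed by `loss.backward()`.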
