Abstract

The typical aim of a classification task is to maximize the accuracy of the predicted label for a given input. Accuracy increases with the confidence, defined as the maximal value of the output units; when accuracy equals confidence, calibration is achieved. Herein, several methods are proposed to enhance the accuracy of inputs with similar confidence, extending significantly beyond calibration. Using the gap between the maximal and second-maximal output values, the accuracy of inputs with similar confidence is enhanced. Extending the confidence or the confidence gap to its minimal value over a set of augmented versions of an input further enhances the accuracy of inputs with similar confidence. Enhanced accuracies are demonstrated for EfficientNet-B0 trained on ImageNet and CIFAR-100, and for VGG-16 trained on CIFAR-100. The results suggest improved applications for high-accuracy classification tasks in which a given fraction of low-accuracy inputs must be handled manually.
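As a rough illustration of the quantities described above, the sketch below computes the confidence (maximal softmax output), the gap between the largest and second-largest outputs, and their minimal values over a set of augmented versions of the same inputs. This is a minimal sketch, not the paper's exact procedure: the use of softmax probabilities rather than raw logits, and the function names `confidence_and_gap` and `min_over_augmentations`, are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def confidence_and_gap(logits):
    """Per-input confidence (maximal softmax output) and gap
    (difference between the largest and second-largest outputs).

    Note: applying softmax is an assumption; the measures could equally
    be defined on raw output units.
    """
    probs = F.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).values      # shape: (batch, 2)
    confidence = top2[:, 0]
    gap = top2[:, 0] - top2[:, 1]
    return confidence, gap


def min_over_augmentations(model, augmented_batches):
    """Minimal confidence and minimal gap across a set of augmented
    versions of the same inputs.

    `augmented_batches` is assumed to be a list of tensors, each of shape
    (batch, C, H, W), where element i of every tensor is an augmentation
    of the same underlying input i.
    """
    confs, gaps = [], []
    with torch.no_grad():
        for batch in augmented_batches:
            c, g = confidence_and_gap(model(batch))
            confs.append(c)
            gaps.append(g)
    min_conf = torch.stack(confs, dim=0).min(dim=0).values
    min_gap = torch.stack(gaps, dim=0).min(dim=0).values
    return min_conf, min_gap
```

In such a scheme, inputs whose (minimal) confidence or gap falls below a chosen threshold would be routed to manual handling, while the remainder are accepted with enhanced accuracy.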
