Existing Reject Option Classification formulations typically learn a function (e.g., a softmax threshold) to select or reject uncertain classifier predictions. Such approaches rely on a user-defined cost of rejection or on constraints on the accuracy or coverage of the selected predictions. We formulate a new objective for applications without such costs or constraints by imposing a natural constraint on the rejected predictions. Our proposed Reject Option Classification formulation eliminates regions of random-chance classification in the decision space of any neural classifier and dataset. The goal is to maximize accuracy in the selected region while permitting a reasonable degree of prediction randomness in the rejected region; ideally, more incorrect than correct predictions are rejected. We employ a novel selection/rejection function and learn per-class softmax thresholds using a validation set. Results demonstrate the advantages of our proposed method over naïvely thresholding calibrated or uncalibrated softmax scores. We evaluate on 2-D point, image, and text classification datasets using state-of-the-art pretrained and learned models. Source code is available at https://github.com/osu-cvl/learning-idk.
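To make the core mechanism concrete, below is a minimal sketch of per-class softmax thresholding with a reject option. It is not the paper's actual method: the selection/rejection function names, the grid search, and the selected-region-accuracy criterion are illustrative assumptions on our part, and the paper's full objective additionally constrains the rejected region (see the repository linked above for the authors' implementation).

```python
import numpy as np

def select_or_reject(probs, thresholds):
    """Accept a prediction only if its top softmax score meets the
    threshold for the predicted class; otherwise reject ("IDK")."""
    preds = probs.argmax(axis=1)                    # predicted class per sample
    top_scores = probs[np.arange(len(probs)), preds]
    accepted = top_scores >= thresholds[preds]      # per-class threshold lookup
    return preds, accepted

def fit_per_class_thresholds(probs, labels, grid=np.linspace(0.0, 1.0, 101)):
    """Illustrative grid search for one softmax threshold per class on a
    held-out validation set, maximizing accuracy among the selected
    (non-rejected) predictions. NOTE: this simplified criterion can drive
    thresholds high and reject aggressively; the paper instead balances
    selected-region accuracy against a constraint on the rejected region."""
    n_classes = probs.shape[1]
    thresholds = np.zeros(n_classes)
    preds = probs.argmax(axis=1)
    top_scores = probs[np.arange(len(probs)), preds]
    for c in range(n_classes):
        mask = preds == c                           # validation samples predicted as class c
        if not mask.any():
            continue
        best_acc = -1.0
        for t in grid:                              # ties keep the lowest threshold
            sel = mask & (top_scores >= t)
            if sel.sum() == 0:
                continue
            acc = (labels[sel] == c).mean()
            if acc > best_acc:
                best_acc, thresholds[c] = acc, t
    return thresholds

if __name__ == "__main__":
    # Synthetic demo: random softmax scores stand in for a real classifier.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(200, 3))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    labels = rng.integers(0, 3, size=200)
    t = fit_per_class_thresholds(probs, labels)     # fit on a validation split
    preds, accepted = select_or_reject(probs, t)    # apply to new predictions
    print("thresholds:", t, "coverage:", accepted.mean())
```

In practice the thresholds would be fit on validation-set softmax outputs and then applied unchanged at test time; per-class thresholds allow classes with differently shaped confidence distributions to be selected or rejected at different operating points.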