Abstract

Deep learning (DL) systems have been remarkably successful in various applications, but they can exhibit critical misbehaviors. To identify the weaknesses of a trained model and overcome them through new data collection, one needs to find the model's corner cases. Constructing new datasets to retrain a DL model requires extra budget and time. Test input prioritization (TIP) techniques have been proposed to identify corner cases more effectively. The state-of-the-art TIP approach applies a monitoring method and prioritizes based on Gini impurity, which estimates the similarity between a DL model's prediction probabilities and the uniform distribution. This letter proposes a new TIP method that uses the distance between false prediction cluster (FPC) centroids computed on a training set and a test instance in the last-layer feature space to prioritize error-inducing instances in an unlabeled test set. We refer to the proposed method as DeepFPC. Our numerical experiments show that DeepFPC achieves significantly improved TIP performance on several image classification and active learning tasks.
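The abstract's core idea can be illustrated with a minimal sketch. The following is an assumption-laden illustration, not the authors' implementation: it assumes FPC centroids are formed by grouping the last-layer features of misclassified training instances by their (wrong) predicted class, and that test instances closer to any such centroid are prioritized first. The function names `fpc_centroids` and `prioritize` are hypothetical.

```python
import numpy as np

def fpc_centroids(train_feats, y_true, y_pred):
    """One centroid per false-prediction cluster: misclassified training
    instances grouped by predicted class (an assumed clustering scheme)."""
    wrong = y_pred != y_true
    return {c: train_feats[wrong & (y_pred == c)].mean(axis=0)
            for c in np.unique(y_pred[wrong])}

def prioritize(test_feats, centroids):
    """Rank test instances by distance to the nearest FPC centroid,
    closest (most likely error-inducing, under this sketch) first."""
    C = np.stack(list(centroids.values()))                    # (k, d)
    dists = np.linalg.norm(test_feats[:, None, :] - C[None, :, :],
                           axis=-1).min(axis=1)               # (n,)
    return np.argsort(dists)
```

Under this sketch, a labeling budget would be spent on the highest-ranked test instances first, mirroring how TIP methods are typically evaluated.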
