• We propose a joint training methodology for knowledge distillation and open set recognition to improve model robustness.
• We demonstrate how knowledge distillation (KD) and open set recognition (OSR) can be performed on 3D point cloud data.
• The proposed method yields a much smaller model that also has stronger OSR capabilities than the original model.
• We find that KD by itself can transfer OSR ability to a student model, in addition to the previously known dark knowledge transfer.
• We show that there is a trade-off between the student model's open-set and closed-set performance.

Real-world scenarios pose several challenges to deep-learning-based computer vision techniques despite their tremendous success in research. Deeper models provide better performance but are difficult to deploy; knowledge distillation allows us to train smaller models with minimal loss in performance. A model also has to deal with open-set samples from classes outside the ones it was trained on, and it should identify these as unknown while classifying the known samples correctly. Finally, most existing image recognition research uses only two-dimensional snapshots of three-dimensional real-world objects. In this work, we attempt to bridge these three research fields, which have so far been developed independently despite being deeply interrelated in practice. We propose a joint knowledge distillation and open set recognition training methodology for three-dimensional object recognition. Through various experiments, we demonstrate that the proposed method yields a much smaller model that takes only a minimal hit in performance while being capable of open set recognition on 3D point cloud data.
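A minimal sketch of the joint training idea, assuming a PyTorch point-cloud classifier: the abstract does not give the exact formulation, so this combines standard soft-target distillation (Hinton et al.) for the student with a common open-set baseline (thresholding the maximum softmax probability) for rejecting unknown samples at test time. The names `joint_kd_loss`, `predict_open_set`, and all hyperparameter values are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F


def joint_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Closed-set cross-entropy blended with soft-target distillation.

    Assumed formulation: the paper's exact joint objective is not stated
    in the abstract.
    """
    # Supervised loss on the known (closed-set) classes.
    ce = F.cross_entropy(student_logits, labels)
    # Dark-knowledge transfer: match softened teacher/student distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable to the CE term
    return (1 - alpha) * ce + alpha * kd


@torch.no_grad()
def predict_open_set(student_logits, threshold=0.5, unknown_label=-1):
    """Reject low-confidence point clouds as unknown (assumed OSR rule)."""
    probs = F.softmax(student_logits, dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = unknown_label
    return pred
```

Raising `alpha` weights the teacher's soft targets more heavily, while the rejection `threshold` controls the open-set versus closed-set trade-off the abstract refers to: a higher threshold rejects more unknowns but also more correctly classified known samples.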