Abstract

Few-shot object detection is a challenging task that aims to recognize and localize novel classes with limited labeled data. Although substantial progress has been made, existing methods mostly struggle with forgetting and lack stability across different few-shot training samples. In this paper, we reveal two gaps affecting meta-knowledge transfer that lead to unstable performance and forgetting in meta-learning-based frameworks. To this end, we propose sample normalization, a simple yet effective method that enhances performance stability and reduces forgetting. Additionally, we apply Z-score normalization to mitigate the hubness problem in high-dimensional feature space. Experimental results on the PASCAL VOC dataset demonstrate that our approach outperforms existing methods in both accuracy and stability, achieving up to +4.4 mAP@0.5 and +5.3 mAR in a single run, and +4.8 mAP@0.5 and +5.1 mAR on average over 10 random experiments. Furthermore, our method alleviates the performance drop on base classes. The code will be released to facilitate future research.
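The abstract does not specify implementation details, but the Z-score normalization it mentions is a standard statistical transform; a minimal sketch of applying it across a batch of high-dimensional feature vectors (the axis choice and epsilon are assumptions, not the paper's method) might look like:

```python
import numpy as np

def z_score_normalize(features, eps=1e-8):
    """Z-score normalize a (num_samples, dim) feature matrix:
    center each dimension to zero mean and scale to unit variance.
    eps guards against division by zero for constant dimensions."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True)
    return (features - mean) / (std + eps)

# Example: normalize 5 feature vectors of dimension 128
rng = np.random.default_rng(0)
feats = rng.normal(loc=2.0, scale=3.0, size=(5, 128))
normed = z_score_normalize(feats)
```

Centering and rescaling each feature dimension in this way is one common remedy discussed in the hubness literature, since it reduces the tendency of a few points to appear among the nearest neighbors of disproportionately many queries.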
