Abstract

With the advantage of using only a limited number of samples, few-shot learning has developed rapidly in recent years. It is mostly applied to object classification or detection with a small number of samples, typically fewer than ten. However, there is little research on few-shot detection, especially one-shot detection. In this paper, a multifeature information-assisted one-shot detection method is proposed to improve the accuracy of one-shot object detection. Specifically, two auxiliary modules are added to the detection algorithm: a Semantic Feature Module (SFM) and a Detail Feature Module (DFM), which extract semantic and detailed feature information, respectively, from samples in the support set. These two kinds of information are then combined with the feature map extracted from the query image to obtain the corresponding auxiliary information used to complete one-shot detection. Because the two auxiliary modules retain more semantic and detailed information from the support-set samples, the proposed method makes better use of sample feature information and improves object detection accuracy by 2.97% compared to the benchmark method.
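
As a rough illustration of the idea (not the authors' released code), the sketch below shows how two auxiliary branches of this kind could be wired in PyTorch: a semantic branch that pools the support feature map into channel weights, a detail branch that keeps its spatial structure, and an assumed fusion step that injects both into the query feature map. The pooling, channel attention, and cosine-similarity choices are illustrative assumptions only; the paper does not specify these layers here.

```python
# Hypothetical sketch of the two auxiliary branches named in the abstract (PyTorch assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticFeatureModule(nn.Module):
    """Assumption: summarize the support feature map into a semantic channel vector."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, support_feat: torch.Tensor) -> torch.Tensor:  # (B, C, H, W)
        vec = support_feat.mean(dim=(2, 3))        # global average pooling -> (B, C)
        return torch.sigmoid(self.fc(vec))         # channel attention weights

class DetailFeatureModule(nn.Module):
    """Assumption: a light convolution that preserves the spatial (detail) structure."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, support_feat: torch.Tensor) -> torch.Tensor:  # (B, C, H, W)
        return self.conv(support_feat)

def auxiliary_fusion(query_feat, sem_weights, detail_feat):
    """Assumed fusion: reweight query channels semantically, then modulate each
    spatial position by its cosine similarity to the pooled detail features."""
    weighted = query_feat * sem_weights[:, :, None, None]
    detail_vec = F.normalize(detail_feat.mean(dim=(2, 3)), dim=1)        # (B, C)
    q = F.normalize(weighted, dim=1)                                     # (B, C, H, W)
    sim = (q * detail_vec[:, :, None, None]).sum(dim=1, keepdim=True)    # (B, 1, H, W)
    return weighted * (1.0 + sim)

# toy usage: one support sample and one query feature map, both 256-channel 32x32
sfm, dfm = SemanticFeatureModule(256), DetailFeatureModule(256)
support = torch.randn(1, 256, 32, 32)
query = torch.randn(1, 256, 32, 32)
enhanced = auxiliary_fusion(query, sfm(support), dfm(support))
print(enhanced.shape)  # torch.Size([1, 256, 32, 32])
```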

Highlights

  • Deep neural networks have been widely used in computer vision, such as posture recognition [1] and plant disease recognition [2], and object detection is a research hotspot in this field

  • Experimental results showed that both the Semantic Feature Module (SFM) and the Detail Feature Module (DFM) could increase the accuracy of one-shot detection

  • A combination of the two modules could even increase the detection accuracy by 2.97% compared to the original algorithm


Summary

Introduction

Deep neural networks have been widely used in computer vision, such as posture recognition [1] and plant disease recognition [2], and object detection is a research hotspot in this field. Existing few-shot detection methods fall into three categories: fine-tuning, model structure-based learning, and metric-based learning. Existing metric-based few-shot detection mainly divides the dataset into a support set and a query set. It selects several image samples from the two sets to form the minimum training unit (meta-task) and trains the model through specific strategies. The detection algorithm first obtains the corresponding features of the images in the two sets, measures the distance between the two features, and judges the object category according to that distance. Because the current algorithm only performs a simple distance measurement, the utilization rate of object feature information is extremely low. To solve this problem, this paper proposes a novel one-shot detection method based on metric-based learning. A combination of the two proposed auxiliary modules can even increase the detection accuracy by 2.97% compared to the original algorithm.
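
For context, a minimal sketch of the metric-based baseline described above might look as follows: embed the single support image per class and each query proposal with the same backbone, then assign each proposal to the class of the nearest support embedding. The Euclidean distance, the embedding dimension, and the toy inputs are assumptions for illustration; the paper's baseline may use a different metric and backbone.

```python
# Illustrative sketch of metric-based one-shot classification (not the authors' code).
import torch
import torch.nn.functional as F

def metric_classify(query_embs: torch.Tensor, support_embs: torch.Tensor) -> torch.Tensor:
    """query_embs: (Nq, D) embeddings of query proposals.
    support_embs: (Nc, D) one embedding per class (one-shot support set).
    Returns the predicted class index for each query proposal."""
    dists = torch.cdist(query_embs, support_embs)   # pairwise Euclidean distances, (Nq, Nc)
    return dists.argmin(dim=1)                      # nearest support class per proposal

# toy usage: 5 query proposals, 3 support classes, 128-d embeddings
queries = F.normalize(torch.randn(5, 128), dim=1)
supports = F.normalize(torch.randn(3, 128), dim=1)
print(metric_classify(queries, supports))
```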
