Abstract

The past decade has witnessed significant progress in detecting small objects that are often distributed with large scale variations and arbitrary orientations. However, most existing models are unable to detect extremely small objects, because (1) tiny-scale objects, which are more blurred than ground-view objects, provide less information for precise and robust recognition; and (2) unevenly distributed objects make detection inefficient, particularly in regions crowded with dense objects, which leads to the common inconsistency between classification score and localization accuracy. This research proposes an efficient YOLOv8 model with a new backbone network and an anchor-free detection head to address these issues. Bottleneck layers are used in the backbone, allowing the model to capture more complex patterns and reduce the dimensionality of intermediate feature maps while keeping the number of input and output channels the same. The anchor-free detection head requires no pre-defined anchor boxes; it directly predicts each object's center point, size, and corresponding class probabilities. The proposed model is experimentally verified on the VisDrone dataset and achieves a higher mAP (45.9%) and precision (76.8%). These results suggest that the proposed model outperforms existing models in detecting extremely small objects.
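To make the two architectural ideas in the abstract concrete, the following is a minimal PyTorch sketch of a channel-preserving bottleneck block and an anchor-free head that predicts per-location class scores, center offsets, and box sizes. This is an illustrative assumption of how such components are commonly built, not the paper's actual implementation; all module and parameter names are hypothetical.

```python
# Illustrative sketch only: a bottleneck block (1x1 reduce -> 3x3 -> 1x1 expand)
# and an anchor-free detection head. Not the paper's code.
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    """Compresses channels internally, restores them at the output, adds a residual."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        hidden = channels // reduction
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.SiLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Same number of channels in and out, so the residual addition is valid.
        return self.act(x + self.block(x))


class AnchorFreeHead(nn.Module):
    """Predicts class probabilities, center offsets, and box sizes per grid cell."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.cls_branch = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.ctr_branch = nn.Conv2d(in_channels, 2, kernel_size=1)   # (dx, dy) center offsets
        self.size_branch = nn.Conv2d(in_channels, 2, kernel_size=1)  # (w, h) box sizes

    def forward(self, feat: torch.Tensor):
        cls = self.cls_branch(feat).sigmoid()   # class probabilities
        ctr = self.ctr_branch(feat).sigmoid()   # offsets within each grid cell
        size = self.size_branch(feat).exp()     # strictly positive sizes
        return cls, ctr, size


if __name__ == "__main__":
    feat = torch.randn(1, 256, 80, 80)                     # a backbone feature map
    feat = Bottleneck(256)(feat)
    cls, ctr, size = AnchorFreeHead(256, num_classes=10)(feat)
    print(cls.shape, ctr.shape, size.shape)                # per-cell predictions, no anchors
```

Because no anchor boxes are defined, each spatial location directly yields one candidate detection, which sidesteps anchor tuning and part of the classification/localization mismatch the abstract mentions.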
