Abstract

Object detection is a central problem in computer vision. Object detection in images captured by unmanned aerial vehicles (UAVs), i.e. drones, has versatile applications in defence and security, agriculture, and GIS. However, despite the many solutions proposed for this task, real-time object detection in UAV scenarios remains difficult due to environmental factors such as occlusion and viewpoint variation. This paper proposes an improved YOLOv3-tiny object detector that introduces a multi-dilated module between the convolution unit and the receptive field; the problem of a small number of positive training samples is addressed by enlarging the predicted feature map, thereby reducing the rate of label rewriting in YOLOv3-tiny. We find that fusing multi-scale receptive fields is effective for detecting even very small objects. We also introduce a path aggregation module that merges the semantic information of deeper layers with the detailed information of earlier layers. On the VisDrone2019-Det test set, the proposed model is more efficient and effective, running 2.96% faster and achieving 4.0% higher AP50 than YOLOv3.
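The receptive-field fusion described above can be illustrated with a short sketch. This is an assumption-based illustration inferred from the abstract, not the paper's exact architecture: a multi-dilated block applies parallel 3×3 convolutions with different dilation rates, so each spatial position sees several receptive-field sizes at once before the branches are fused. The dilation rates (1, 2, 3) below are hypothetical.

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation d:
    d * (k - 1) + 1, since dilation inserts d - 1 gaps between taps."""
    return d * (k - 1) + 1

# Parallel 3x3 branches with hypothetical dilation rates 1, 2 and 3 cover
# 3x3, 5x5 and 7x7 receptive fields at the same input resolution; fusing
# them lets one feature map respond to objects of several scales.
branches = {d: effective_kernel(3, d) for d in (1, 2, 3)}
print(branches)  # {1: 3, 2: 5, 3: 7}
```

Because dilation grows the receptive field without striding or pooling, the predicted feature map keeps its larger spatial size, which is consistent with the abstract's point about more positive samples and less label rewriting.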
