Abstract

Unmanned aerial vehicles (UAVs) are among the main means of information warfare, serving in battlefield cruises, reconnaissance, and military strikes. Rapid detection and accurate recognition of key targets in UAV images form the basis of subsequent military tasks. UAV images are characterized by high resolution and small target size, and practical applications often demand high detection speed. Existing algorithms cannot achieve an effective trade-off between detection accuracy and speed. This paper therefore proposes a parallel ensemble deep learning framework for multi-target detection in UAV video, built around a global and local joint detection strategy. It combines a deep learning target detection algorithm with template matching to make full use of image information, and it integrates multi-process and multi-threading mechanisms to speed up processing. Experiments show that the system achieves high detection accuracy for targets at focal lengths varying from one to ten times, while detection results for moving UAV video images are displayed stably and in real time.
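
The global and local joint strategy described above could look roughly like the following minimal sketch. This is an assumption-based illustration, not the paper's implementation: a separate process runs the (slow) deep detector on whole frames, while the main loop runs fast OpenCV template matching in local windows around earlier detections. The detector stub `detect_global`, the 0.7 matching threshold, and the queue sizes are hypothetical placeholders rather than values from the paper.

```python
# Sketch of a parallel global (deep detector) + local (template matching) pipeline.
import multiprocessing as mp
import cv2
import numpy as np

def detect_global(frame):
    """Hypothetical stand-in for the deep-learning detector (e.g. Faster R-CNN).
    Returns a list of (x, y, w, h) boxes; replace with a real model."""
    return []

def match_local(frame, template, search_box):
    """Re-locate a previously detected target inside a local search window
    using normalized cross-correlation template matching."""
    x, y, w, h = search_box
    roi = frame[y:y + h, x:x + w]
    if roi.shape[0] < template.shape[0] or roi.shape[1] < template.shape[1]:
        return None
    score_map = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score_map)
    if max_val < 0.7:  # assumed confidence threshold
        return None
    tx, ty = max_loc
    return (x + tx, y + ty, template.shape[1], template.shape[0])

def global_worker(frame_q, result_q):
    """Runs the slow whole-frame deep detector in a separate process."""
    while True:
        idx, frame = frame_q.get()
        if frame is None:
            break
        result_q.put((idx, detect_global(frame)))

if __name__ == "__main__":
    frame_q, result_q = mp.Queue(maxsize=4), mp.Queue()
    proc = mp.Process(target=global_worker, args=(frame_q, result_q), daemon=True)
    proc.start()

    templates = []  # (template crop, search box) pairs from earlier detections
    for idx in range(100):  # stand-in for the UAV video stream
        frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
        if not frame_q.full():  # feed the global detector as fast as it can keep up
            frame_q.put((idx, frame))
        for tpl, box in templates:  # fast local pass on every frame
            match_local(frame, tpl, box)
        while not result_q.empty():  # fold in global detections as they arrive
            _, boxes = result_q.get()
            templates = [(frame[y:y + h, x:x + w], (x, y, w, h)) for x, y, w, h in boxes]

    frame_q.put((None, None))
    proc.join()
```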

Highlights

  • Unmanned aerial vehicles (UAVs) have been widely used in photography due to their small size, fast movement speed, wide coverage, etc. [1,2,3,4,5,6,7,8]

  • Combining image processing technology and pattern recognition methods to analyze drone videos or images, so as to achieve fast and stable target detection, is the basis for advanced military tasks such as subsequent battlefield environment awareness, guidance of individual soldier operations, and rapid targeting

  • A classic region proposal network (RPN) was designed to extract proposals, unifying region-of-interest (ROI) extraction, feature extraction and expression, candidate region classification, and location refinement into a single deep network; this reduced training time by a factor of 250 compared with the region-based convolutional neural network (R-CNN) and raised detection speed to 5 fps, improving both speed and accuracy

Introduction

UAVs have been widely used in photography due to their small size, fast movement speed, and wide coverage [1,2,3,4,5,6,7,8]. The performance of deep neural networks in classification tasks demonstrates their excellent ability for feature extraction and expression, which has attracted extensive research in the field of target detection. A classic RPN was designed to extract proposals, unifying ROI extraction, feature extraction and expression, candidate region classification, and location refinement into a single deep network; this reduced training time by a factor of 250 compared with R-CNN and raised detection speed to 5 fps, improving both speed and accuracy. Considering the advantages and disadvantages of deep learning in image processing, the proposed approach combines a template matching algorithm with a local and global joint detection strategy to achieve real-time, stable, and accurate detection and recognition of UAV ground targets. The real-time and stable display of detection results is realized for moving UAV video images.
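
To make the RPN-based detector above concrete, the following hedged sketch uses torchvision's reference Faster R-CNN (assuming torchvision >= 0.13 is installed) as the kind of global detector such a framework could build on; the score threshold and dummy frame size are illustrative assumptions, not parameters from the paper.

```python
# Hedged illustration of an RPN-based (Faster R-CNN) detector via torchvision.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def detect(frame_chw: torch.Tensor, score_thresh: float = 0.5):
    """frame_chw: float tensor in [0, 1], shape (3, H, W).
    The internal RPN proposes candidate boxes; the detection head then
    classifies and refines them -- the 'unified network' idea described above."""
    output = model([frame_chw])[0]
    keep = output["scores"] >= score_thresh
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

# Example with a dummy frame standing in for one UAV video frame.
boxes, labels, scores = detect(torch.rand(3, 720, 1280))
print(boxes.shape, labels.shape, scores.shape)
```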

Proposed Recognition Network
Design of image recognition backbone
Basic structure of low-resolution remote sensing image recognition
Proposed Parallel Computation Framework
Local Object Detection Method Based on Deep Learning
Design of anchor
Global Information Integration and Ground Station Display
Verification Conditions
Experimental Methods
Conclusions