Abstract

Here, we demonstrate how deep neural network (DNN) detections of multiple constitutive or component objects that are part of a larger, more complex, encompassing feature can be spatially fused to improve the search, detection, and retrieval (ranking) of that larger complex feature. First, scores computed from a spatial clustering algorithm are normalized to a reference space so that they are independent of image resolution and DNN input chip size. Then, multiscale DNN detections of the various component objects are fused to improve the detection and retrieval of the larger complex feature. We demonstrate the utility of this approach for broad area search and detection of surface-to-air missile (SAM) sites, which have a very low occurrence rate (only 16 sites) over a $\sim$90,000 km$^2$ study area in SE China. The results demonstrate that spatial fusion of multiscale component-object DNN detections can reduce the detection error rate for SAM sites by $>$85% while maintaining 100% recall. The novel spatial fusion approach demonstrated here can be easily extended to a wide variety of other challenging object search and detection problems in large-scale remote sensing image datasets.
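To make the fusion idea concrete, the following is a minimal Python sketch of one way the two steps described above could be implemented: normalizing per-detection scores to a reference resolution and chip size, then spatially clustering multiscale component-object detections (e.g., launch pads, TELs) into fused, ranked candidate site locations. The class names, weights, clustering radius, and normalization formula are illustrative assumptions, not the implementation reported in the paper.

```python
# Hypothetical sketch of the spatial-fusion idea described in the abstract.
# Names, weights, radius, and the normalization are illustrative assumptions.
from dataclasses import dataclass
import math

@dataclass
class Detection:
    x_m: float        # easting of detection center (meters)
    y_m: float        # northing of detection center (meters)
    score: float      # DNN confidence in [0, 1]
    obj_class: str    # component class, e.g. "launch_pad", "tel"
    gsd_m: float      # ground sample distance of source image (m/pixel)
    chip_px: int      # DNN input chip size (pixels)

REF_GSD_M = 1.0      # assumed reference resolution (m/pixel)
REF_CHIP_PX = 512    # assumed reference chip size (pixels)

def normalize_score(det: Detection) -> float:
    """Scale a detection score by the ratio of a fixed reference ground
    footprint to the chip's actual footprint, so scores from different
    resolutions and chip sizes are comparable (one plausible normalization)."""
    footprint = (det.gsd_m * det.chip_px) ** 2
    ref_footprint = (REF_GSD_M * REF_CHIP_PX) ** 2
    return det.score * min(1.0, ref_footprint / footprint)

def fuse_candidates(dets, radius_m=500.0, class_weights=None):
    """Greedy spatial clustering: group detections within radius_m of a
    cluster seed, then sum class-weighted normalized scores per cluster
    to rank candidate complex-feature (e.g. SAM site) locations."""
    class_weights = class_weights or {"launch_pad": 1.0, "tel": 0.5}
    clusters = []
    for det in sorted(dets, key=normalize_score, reverse=True):
        for c in clusters:
            if math.hypot(det.x_m - c["x"], det.y_m - c["y"]) <= radius_m:
                c["members"].append(det)
                break
        else:
            clusters.append({"x": det.x_m, "y": det.y_m, "members": [det]})
    ranked = []
    for c in clusters:
        fused = sum(class_weights.get(d.obj_class, 0.0) * normalize_score(d)
                    for d in c["members"])
        ranked.append((fused, c["x"], c["y"]))
    return sorted(ranked, reverse=True)
```

In this sketch, the fused cluster score drives the retrieval ranking: clusters containing several mutually supporting component detections rise to the top, while isolated single-object false alarms are pushed down the ranked list.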

Highlights

  • Within the last five years, deep neural networks (DNNs) have been shown through extensive experimental validation to deliver outstanding performance for object detection/recognition in a variety of benchmark high-resolution remote sensing image datasets [1]-[7]

  • Although this F1 score is less than half of the maximum in Table VI, this technique still achieved an 88.5% relative error reduction compared to the baseline results for the candidate surface-to-air missile (SAM) site locations within the SE China area of interest (AOI)

  • We significantly improved upon this prior study by using multiple DNNs to detect smaller component objects, e.g., launch pads and transporter erector launchers (TELs), belonging to the larger and more complex SAM site feature

Introduction

Within the last five years, deep neural networks (DNNs) have been shown through extensive experimental validation to deliver outstanding performance for object detection/recognition in a variety of benchmark high-resolution remote sensing image datasets [1]-[7]. Methods such as You Only Look Once (YOLO) [8], region-based CNN (R-CNN) [9], and derivations thereof [10]-[15] have all shown promising results for a variety of object detection applications in remote sensing imagery. Since “large-scale” and “broad area” are subjective descriptors, here we define these to be applications where the algorithm is applied to validation image datasets, i.e. excluding training data, covering an area greater than 1,000 km$^2$.
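To give a sense of scale, the following is a small, hypothetical Python sketch (not from the paper) showing how a broad-area scene of roughly this size might be tiled into fixed-size chips for DNN inference; the chip size, overlap, and 1 m/pixel resolution are illustrative assumptions.

```python
# Hypothetical illustration of a "broad area" application: a large validation
# scene is tiled into overlapping fixed-size chips fed to a detector one at a
# time. Chip size, overlap, and ground sample distance are assumed values.
def chip_origins(scene_w_px, scene_h_px, chip_px=512, overlap_px=64):
    """Yield (row, col) upper-left corners of overlapping chips covering a scene."""
    step = chip_px - overlap_px
    for row in range(0, max(scene_h_px - chip_px, 0) + 1, step):
        for col in range(0, max(scene_w_px - chip_px, 0) + 1, step):
            yield row, col

# Example: a ~1,000 km^2 scene at an assumed 1 m/pixel is ~31,623 x 31,623 pixels.
side_px = 31_623
n_chips = sum(1 for _ in chip_origins(side_px, side_px))
print(f"{n_chips} chips of 512x512 px (64 px overlap) cover the scene")
```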
