Abstract

Aircraft detection in remote sensing images is significant in both military and civilian fields, such as air traffic control and battlefield dynamic monitoring. Deep learning methods can achieve promising detection performance given sufficient labeled samples. However, current aircraft datasets are mainly drawn from a single data source and lack diverse scenes and targets, making it difficult to train a robust and generalized detector. Therefore, we manually label and construct a complex optical remote sensing aircraft target detection dataset (CORS-ADD) from Google Earth and multiple satellites such as WorldView-2, WorldView-3, Pleiades, Jilin-1, and IKONOS. It contains 7,337 images covering typical airports and various rare scenes, including aircraft carriers and aircraft in flight over ocean and land. The dataset consists of 32,285 civil and military aircraft instances, including bombers, fighters, and early-warning aircraft. These targets range from 4×4 pixels to 240×240 pixels and are all labeled with both horizontal bounding box (HBB) and oriented bounding box (OBB) annotations. The diverse scenes and abundant instances can fully support the training and evaluation of data-driven algorithms. In addition, we train and evaluate several detectors on the constructed dataset to provide a benchmark and help promote the development of aircraft detection techniques.
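
As a minimal sketch of how the two annotation types relate, assuming each OBB is stored as four (x, y) corner points in pixel coordinates (the dataset's actual annotation schema is not given in the abstract, and the function and sample values below are hypothetical), the enclosing HBB can be derived by taking the axis-aligned extent of the corners:

```python
import numpy as np

def obb_to_hbb(obb_corners):
    """Convert an oriented bounding box, given as four (x, y) corner points,
    to the axis-aligned horizontal bounding box that encloses it.

    Returns (xmin, ymin, xmax, ymax) in pixel coordinates.
    """
    pts = np.asarray(obb_corners, dtype=float)  # shape (4, 2)
    xmin, ymin = pts.min(axis=0)                # smallest x and y over the corners
    xmax, ymax = pts.max(axis=0)                # largest x and y over the corners
    return xmin, ymin, xmax, ymax

# Hypothetical OBB for a small aircraft target (illustrative values only).
obb = [(120.0, 80.0), (160.0, 95.0), (150.0, 130.0), (110.0, 115.0)]
print(obb_to_hbb(obb))  # -> (110.0, 80.0, 160.0, 130.0)
```

A conversion of this kind is what allows a single dual-annotated dataset to benchmark both horizontal and oriented detectors without re-labeling.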
