Abstract

Object detection plays an important role in remote sensing imagery analysis. The most challenging issues in advancing this task are the large variation in object scales and the arbitrary orientation of objects. In this paper, we build a unified framework upon the region-based convolutional neural network for arbitrary-oriented and multi-scale object detection in remote sensing images. To handle multi-scale object detection, a feature-fusion architecture is proposed to generate a multi-scale feature hierarchy, which augments the features of shallow layers with semantic representations via a top-down pathway and combines the feature maps of top layers with low-level information via a bottom-up pathway. By combining features of different levels, we form a powerful feature representation for multi-scale objects. Most previous methods locate objects with arbitrary orientations and dense spatial distributions via axis-aligned boxes, which may cover adjacent instances and background areas. We instead build a rotation-aware object detector that uses oriented boxes to localize objects in remote sensing images. The region proposal network augments the anchors with multiple default angles so that it generates oriented proposal boxes that tightly enclose objects, rather than horizontal proposals that only coarsely locate oriented objects. An orientation RoI pooling operation is introduced to extract the feature maps of oriented proposals for the following R-CNN subnetwork. We conduct comprehensive experiments on a public dataset for oriented object detection in remote sensing images. Our method achieves state-of-the-art performance, which demonstrates the effectiveness of the proposed method.
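
To make the rotation-aware anchor design more concrete, the following minimal sketch (plain NumPy; the function name and the scale, aspect-ratio, and angle values are illustrative assumptions, not the settings used in the paper) enumerates anchors as (cx, cy, w, h, angle) tuples by crossing each feature-map location with a set of scales, aspect ratios, and default angles, which is the kind of angle-augmented anchor set the region proposal network operates on.

import numpy as np

def generate_rotated_anchors(feat_h, feat_w, stride,
                             scales=(32, 64, 128),
                             ratios=(0.5, 1.0, 2.0),
                             angles=(-60, -30, 0, 30, 60, 90)):
    """Enumerate rotated anchors (cx, cy, w, h, angle in degrees) for one
    feature map. Scales/ratios/angles are illustrative defaults only."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # anchor centre in image coordinates
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # keep anchor area roughly s*s while varying aspect ratio
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    for a in angles:
                        anchors.append((cx, cy, w, h, a))
    return np.array(anchors, dtype=np.float32)

# Example: a 64x64 feature map with stride 16 gives 64*64*3*3*6 anchors.
anchors = generate_rotated_anchors(64, 64, stride=16)
print(anchors.shape)  # (221184, 5)

In practice the network would regress offsets from each of these rotated anchors and score them, and the surviving oriented proposals would then be fed to the oriented RoI pooling stage described above.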
