Abstract

Anchor-free aerial object detection methods have recently attracted much attention due to their simplicity and efficiency. However, their performance remains unsatisfactory for two main reasons. On the one hand, anchor-free detectors employ ordinary convolution layers with axis-aligned receptive fields to extract object features, and therefore lack an internal mechanism for handling rotation variance. On the other hand, such detectors sacrifice much semantic information for faster detection, leaving them unable to cope with the high inter-class similarity and intra-class diversity of aerial objects. To address these issues, we present a novel anchor-free detector, termed Rotation-Insensitive Point Representation (R2IPoints), in which a set of category-aware points encodes the spatial and semantic information of arbitrarily oriented objects. Specifically, we first devise a Stacked Rotation convolution Module (SRM) that encourages the learning of rotation-insensitive point representations by adaptively modelling orientation-agnostic interdependencies over stochastically rotated features. We further introduce a Class-specific Semantic enhancement Module (CSM), which performs category-aware semantic activation to recalibrate features, making the point representation aware of object categories. By jointly optimizing the two modules in an end-to-end manner, R2IPoints generates point representations that are simultaneously rotation-insensitive and category-aware. Extensive experiments on the challenging DIOR and DOTA datasets demonstrate the superiority of the proposed method: we achieve 72.7% mAP on DIOR and 74.34% mAP on DOTA, surpassing the baseline by 2.4% and 2.49% mAP, respectively. The code is available at https://github.com/shnew/R2IPoints.
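The two ideas in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: SRM learns interdependencies over stochastically rotated convolutional features, whereas here we simply pool a fixed-filter response over the four 90-degree rotations of a feature map (which makes the descriptor invariant to 90-degree input rotations); `class_specific_gate` is a toy, sigmoid-gated channel recalibration standing in for CSM's category-aware semantic activation. All function and parameter names are hypothetical.

```python
import numpy as np

def rotation_insensitive_pool(feat, template):
    """Pool a filter response over the four 90-degree rotations of a
    feature map, so the descriptor is unchanged when the input rotates.

    feat:     (C, H, H) square feature map.
    template: (H, H) fixed filter shared across all rotations.
    Returns a (C,) rotation-insensitive descriptor.
    (Illustrative stand-in for SRM, not the paper's module.)
    """
    responses = [
        np.sum(np.rot90(feat, k=k, axes=(1, 2)) * template, axis=(1, 2))
        for k in range(4)
    ]
    return np.mean(responses, axis=0)

def class_specific_gate(feat, class_logits):
    """Recalibrate channels with a sigmoid gate conditioned on one class,
    a toy stand-in for CSM's category-aware semantic activation.

    feat:         (C, H, W) feature map.
    class_logits: (C,) per-channel logits for one category (hypothetical).
    """
    gate = 1.0 / (1.0 + np.exp(-np.asarray(class_logits)))
    return feat * gate[:, None, None]
```

Because the four 90-degree rotations form a closed group, rotating the input only permutes the set of pooled responses, so their mean is unchanged; the actual SRM pursues the same goal with learned, stochastically rotated convolutions rather than a fixed template.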
