Abstract

Oriented object detection for remote sensing image interpretation is challenging because objects appear with arbitrary orientations and are difficult to locate. Existing methods have made considerable progress based on oriented heads or anchors. However, most of them follow the classical detection paradigm, assigning samples by Intersection-over-Union (IoU) and predicting through two independent tasks. These fixed strategies impair the consistency between classification and localization predictions, so predictions with optimal localization accuracy can be suppressed by non-optimal ones during Non-Maximum Suppression (NMS). To address this problem, a Task-Collaborated Detector (TCD) is proposed. Compared with current single-stage methods, it improves two aspects: Task-Collaborated Assignment (TCA) and Task-Collaborated Head (TCH). To pull the best anchors for the two tasks closer together, TCA introduces classification and localization confidence into sample assignment and tends to select anchors with accurate and consistent predictions as positives during training. TCH provides a better balance between learning interactive and discriminative features: it can flexibly adjust the spatial feature distribution of the classification and localization tasks by learning joint features from an aggregation layer. Extensive experiments are conducted on HRSC2016, DOTA, and DIOR-R, where the proposed TCD achieves state-of-the-art performance (90.60, 80.89, and 65.04 mAP, respectively). A consistency analysis further demonstrates that TCD significantly improves prediction consistency.
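The abstract describes TCA as injecting classification and localization confidence into sample assignment so that anchors whose two predictions agree are preferred as positives. The snippet below is a minimal sketch of such a confidence-aware assignment step; the specific joint metric (a classification-score/IoU product with exponents `alpha` and `beta`), the top-k selection, and all function and tensor names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def assign_positives(cls_scores, ious, alpha=1.0, beta=6.0, topk=13):
    """Select positive anchors whose classification and localization
    predictions are both accurate and mutually consistent (hypothetical form).

    cls_scores: (num_anchors,) predicted confidence for the ground-truth class
    ious:       (num_anchors,) IoU between predicted boxes and the ground truth
    Returns a boolean mask over anchors marking the chosen positives.
    """
    # Joint metric: large only when both tasks score the anchor highly.
    alignment = cls_scores.pow(alpha) * ious.pow(beta)

    # Keep the top-k most aligned anchors as positive samples.
    k = min(topk, alignment.numel())
    _, idx = alignment.topk(k)
    pos_mask = torch.zeros_like(alignment, dtype=torch.bool)
    pos_mask[idx] = True
    return pos_mask

# Example: five candidate anchors for one ground-truth object.
cls = torch.tensor([0.9, 0.2, 0.8, 0.4, 0.1])
iou = torch.tensor([0.7, 0.9, 0.75, 0.3, 0.2])
print(assign_positives(cls, iou, topk=2))
```

Under these assumptions, an anchor that scores well on only one task (e.g., high IoU but low classification confidence) receives a small joint metric and is unlikely to be selected, which is the consistency behavior the abstract attributes to TCA.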
