Abstract

Object detection has developed rapidly in recent years with the help of deep learning. However, object detection from the drone view remains challenging for two main reasons: (1) small-scale objects lack detailed information and are difficult to detect; (2) the diversity of drone camera angles causes dramatic differences in object scale. Although the feature pyramid network (FPN) alleviates the problem of scale differences to some extent, it also retains worthless features, which wastes resources and slows down inference. In this work, we propose a novel High-Resolution Feature Pyramid Network (HR-FPN) to improve the detection accuracy of small-scale objects and avoid feature redundancy. The key components of HR-FPN are a high-resolution feature alignment module (HRFA), a high-resolution feature fusion module (HRFF), and a multi-scale decoupled head (MSDH). HRFA feeds multi-scale features from the backbone into parallel resampling channels to obtain high-resolution features at the same scale. HRFF establishes a bottom-up path that distributes context-rich low-level semantic information to all layers, which are then aggregated into a classification feature and a localization feature. MSDH copes with scale differences by predicting the categories and locations of objects at different scales separately. Moreover, we train the model with a scale-weighted loss to focus more on small-scale objects. Extensive experiments and comprehensive evaluations demonstrate the effectiveness and advantages of our approach.
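As an illustrative sketch only (not the authors' implementation), the HRFA idea of resampling multi-scale backbone features to a common high resolution through parallel channels could look roughly as follows; the module name, channel widths, and the choice of bilinear interpolation are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HRFASketch(nn.Module):
    """Sketch of high-resolution feature alignment (HRFA): each backbone
    level gets its own 1x1 projection, then all levels are resampled to the
    resolution of the finest map. Channel width and bilinear resampling are
    assumptions, not the paper's specification."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # one parallel channel (1x1 conv) per pyramid level
        self.projections = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )

    def forward(self, features):
        # features: list of backbone maps, ordered high-res -> low-res
        target_size = features[0].shape[-2:]  # align to the finest level
        aligned = []
        for feat, proj in zip(features, self.projections):
            x = proj(feat)
            if x.shape[-2:] != target_size:
                x = F.interpolate(x, size=target_size, mode="bilinear",
                                  align_corners=False)
            aligned.append(x)
        return aligned  # same spatial size, ready for fusion (e.g., HRFF)

# usage with dummy ResNet-like feature maps
if __name__ == "__main__":
    feats = [torch.randn(1, c, s, s)
             for c, s in zip((256, 512, 1024, 2048), (80, 40, 20, 10))]
    out = HRFASketch()(feats)
    print([o.shape for o in out])  # all maps now 80x80 with 256 channels
```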
