Abstract

Domain Adaptive Object Detection (DAOD) alleviates the reliance on labeled data by transferring knowledge learned from a labeled source domain to an unlabeled target domain. Recent DAOD methods are modeled mainly on ground-level images. Compared to ground-level images, aerial images suffer from large scale variation and viewpoint diversity, which makes domain adaptive object detection in aerial images a more challenging task. In this work, we address domain shift in aerial images at two levels: 1) image-level shifts, such as weather, lighting, and viewpoint; 2) instance-level shifts, such as object appearance and scale. Specifically, multiple domain-confusion classifiers are designed to learn image-level knowledge common to the source and target domains. The domain classifiers at different levels are further assigned adaptive weights to coordinate the transferability and discriminability of the adaptive detector. Meanwhile, instance-level alignment is realized by forcing the intrinsic relationships between classes in the two domains to be consistent. In addition, we perform instance-level alignment on feature layers of different semantic levels to improve the scale awareness of the adaptation model. Extensive experiments on the VisDrone, UAVDT, DIOR, and DOTA datasets demonstrate that our method achieves the best detection performance in four domain adaptation scenarios compared to other state-of-the-art methods, e.g., in Daytime → Night (VisDrone), the mAP50 is 23.5%; in VisDrone → UAVDT, DIOR → UAVDT, and DOTA → VisDrone, the AP50 of the car is 63.1%, 46.6%, and 44.8%, respectively. Code will be available online (https://github.com/MaYou1997/HANet).
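The idea of assigning adaptive weights to domain classifiers at different feature levels can be illustrated with a minimal sketch. This is a hypothetical weighting scheme, not the paper's actual implementation: each level's weight is derived from how confused its domain classifier already is (via the binary entropy of its source/target prediction), so levels whose features are already domain-invariant contribute less to the adversarial alignment loss.

```python
import numpy as np

def domain_confusion_weight(p_source, eps=1e-8):
    """Adaptive weight for one domain classifier (hypothetical scheme).

    `p_source` is the classifier's mean probability that features come
    from the source domain. High binary entropy (p near 0.5) means the
    classifier is fully confused, i.e. this level is already aligned,
    so its alignment loss is down-weighted toward 0.
    """
    h = -(p_source * np.log(p_source + eps)
          + (1.0 - p_source) * np.log(1.0 - p_source + eps))
    return 1.0 - h / np.log(2.0)  # in [0, 1]; 0 when fully confused

def weighted_alignment_loss(level_probs, level_losses):
    """Combine per-level adversarial losses with the adaptive weights."""
    weights = np.array([domain_confusion_weight(np.mean(p)) for p in level_probs])
    return float(np.sum(weights * np.asarray(level_losses)))
```

For example, a level whose classifier outputs probabilities near 0.5 gets weight close to 0, while a level whose classifier still separates the domains confidently (probabilities near 0 or 1) keeps a weight close to 1, focusing further alignment on the poorly aligned levels.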
