Abstract

Vehicle detection in Unmanned Aerial Vehicle (UAV) imagery plays a crucial role in a variety of applications. However, UAVs are usually small and highly maneuverable, and can capture images from a wide range of viewpoints and altitudes, leading to large variations in vehicle appearance and size. To address the vehicle detection challenge posed by such diversity in UAV images, we seek to align features across different viewpoints, illumination conditions, weather, and backgrounds, using remote sensing imagery as an anchor. Following this domain adaptation concept, we propose a multi-scale adversarial network consisting of a deep convolutional feature extractor, a multi-scale discriminator, and a vehicle detection network. Specifically, the feature extractor is a Siamese network with one path for the UAV imagery and another for the satellite imagery. The shared weights in this sub-network allow us to exploit large collections of labeled remote sensing imagery for improved vehicle detection in UAV imagery. Experimental results show that the proposed algorithm improves vehicle detection accuracy on the UAVDT and VisDrone datasets. The proposed model performs well on images taken from different perspectives, at different altitudes, and under different imaging conditions.
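The core idea of the abstract, a Siamese extractor with shared weights whose features are aligned across the UAV and satellite domains by an adversarial discriminator, can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the dimensions, the single-layer extractor, the logistic discriminator, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen for illustration only.
FEAT_IN, FEAT_OUT = 8, 4

# Shared weights: the same matrix embeds both domains (the Siamese idea --
# one forward path for UAV inputs, one for satellite inputs, same parameters).
W_shared = rng.normal(size=(FEAT_IN, FEAT_OUT))

def extract(x):
    """Shared feature extractor applied to either UAV or satellite inputs."""
    return np.tanh(x @ W_shared)

# Domain discriminator: here a toy logistic unit on the shared features,
# standing in for the paper's multi-scale discriminator.
w_disc = rng.normal(size=FEAT_OUT)

def discriminate(f):
    """Probability that a feature vector came from the satellite domain."""
    return 1.0 / (1.0 + np.exp(-(f @ w_disc)))

# Toy batches standing in for UAV and satellite image features.
x_uav = rng.normal(size=(16, FEAT_IN))
x_sat = rng.normal(size=(16, FEAT_IN))

f_uav, f_sat = extract(x_uav), extract(x_sat)

# Adversarial objective: the discriminator minimizes this cross-entropy,
# while the shared extractor is trained to maximize it, so that UAV and
# satellite features become indistinguishable (domain alignment).
p_uav, p_sat = discriminate(f_uav), discriminate(f_sat)
disc_loss = (-np.mean(np.log(1.0 - p_uav + 1e-8))
             - np.mean(np.log(p_sat + 1e-8)))
print(f"discriminator loss on toy batch: {disc_loss:.4f}")
```

In the full method, gradients from this adversarial loss flow back into the shared extractor (e.g. via a gradient-reversal layer or alternating updates), which is what lets the labeled remote sensing imagery regularize the features used by the UAV-side detection head.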
