Abstract

UAV vehicle detection based on convolutional neural networks suffers from a key problem: the information imbalance across feature layers. Shallow features contain spatial information that benefits localization but lack semantic information; conversely, deep features contain semantic information that benefits classification but lack spatial information. Accurate classification and localization of vehicles in UAV imagery, however, require both shallow spatial information and high-level semantic information. In this work, a bi-directional information guidance network (BDIG-Net) for UAV vehicle detection is proposed, which ensures that each feature prediction layer carries abundant mid-/low-level spatial information as well as high-level semantic information. The BDIG-Net consists of two main parts: a shallow-level spatial information guidance part and a deep-level semantic information guidance part. In the shallow-level guidance part, we design a feature transform module (FTM) to supply mid-/low-level feature information, which guides the BDIG-Net to enhance the detailed and spatial features of deep layers. Furthermore, we adopt a light-weight attention module (LAM) to suppress unnecessary shallow background information, making the network focus on small-sized vehicles. In the deep-level guidance part, we use a classical feature pyramid network to supply high-level semantic information, which guides the BDIG-Net to enhance the contextual information of shallow layers. Meanwhile, we design a feature enhancement module (FEM) to suppress redundant features and improve the discriminability of vehicles. The proposed BDIG-Net thereby reduces the information imbalance. Experimental results show that the BDIG-Net achieves accurate classification and localization of vehicles in UAV imagery while meeting real-time application requirements.
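The abstract does not give the exact structure of the FTM, LAM, or FEM, but the bi-directional guidance idea itself can be sketched: deep semantics flow top-down into shallow layers (FPN-style), while attention-filtered shallow detail flows bottom-up into deep layers. The sketch below is a minimal NumPy illustration under simplifying assumptions (both feature maps share the same channel count, nearest-neighbour upsampling, average pooling, and a channel-wise sigmoid gate as a rough stand-in for the light-weight attention module); it is not the paper's implementation.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # 2x2 average pooling of a (C, H, W) feature map.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def channel_attention(x):
    # Illustrative stand-in for LAM: re-weight each channel by a
    # sigmoid of its global average response, damping background channels.
    gate = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))
    return x * gate[:, None, None]

def bidirectional_guidance(shallow, deep):
    # Top-down path: deep semantic features guide the shallow layer.
    shallow_out = shallow + upsample2x(deep)
    # Bottom-up path: attention-filtered shallow spatial detail
    # guides the deep layer.
    deep_out = deep + downsample2x(channel_attention(shallow))
    return shallow_out, deep_out

if __name__ == "__main__":
    shallow = np.random.rand(8, 16, 16)   # high resolution, spatial detail
    deep = np.random.rand(8, 8, 8)        # low resolution, semantic content
    s_out, d_out = bidirectional_guidance(shallow, deep)
    print(s_out.shape, d_out.shape)       # resolutions are preserved
```

Each output layer thus mixes information from both depths, which is the imbalance-reducing property the BDIG-Net is built around; the real network would use learned convolutions in place of these fixed operations.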
