Lightweight network architectures are essential for autonomous and intelligent monitoring with Unmanned Aerial Vehicles (UAVs), in applications such as object detection, image segmentation, and crowd counting. State-of-the-art lightweight networks obtained through Neural Architecture Search (NAS) usually require enormous computational resources. Moreover, low-performance embedded platforms and high-resolution drone images pose additional challenges for lightweight network learning. To address these problems, this paper proposes a new lightweight object detection model for UAV images, called GhostShuffleNet (GSNet), which is built on Zero-Shot Neural Architecture Search. This paper also introduces the new components that compose GSNet, namely the GhostShuffle (GS) units (loosely based on ShuffleNetV2) and the backbone GSmodel-L. First, a lightweight search space is constructed from GS units to reduce the number of parameters and floating-point operations (FLOPs). Second, the parameters, FLOPs, number of layers, and memory access cost (MAC) are added as constraints to the search strategy of a Zero-Shot Neural Architecture Search algorithm, which then searches for an optimal network, GSmodel-L. Finally, the optimal GSmodel-L is used as the backbone network, and a Ghost-PAN feature fusion module and detection heads are added to complete the design of the lightweight object detection network GSNet. Extensive experiments on the VisDrone2019 dataset (14.92% mAP) and our UAV-OUC-DET dataset (8.38% mAP) demonstrate the efficiency and effectiveness of GSNet. The complete code is available at: https://github.com/yfq-yy/GSNet.
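For illustration only, the following is a minimal PyTorch sketch of what a GhostShuffle (GS) unit could look like if, as the abstract suggests, it loosely follows ShuffleNetV2 (channel split and channel shuffle) while using GhostNet-style cheap feature generation to cut parameters and FLOPs. The class names, channel ratios, and layer choices below are assumptions for exposition, not the authors' implementation; the authoritative code is in the linked repository.

```python
# Hypothetical sketch of a GhostShuffle (GS) unit: ShuffleNetV2-style channel
# split/shuffle combined with a GhostNet-style module. Not the authors' code.
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """ShuffleNetV2-style channel shuffle across two branches."""
    b, c, h, w = x.size()
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)


class GhostModule(nn.Module):
    """GhostNet-style block: a small primary 1x1 conv plus cheap depthwise ops."""

    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        primary_ch = out_ch // ratio
        cheap_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.ReLU(inplace=True),
        )
        # Cheap depthwise convolution generates the remaining "ghost" features.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)


class GhostShuffleUnit(nn.Module):
    """Stride-1 unit: split channels, transform one branch, concat, shuffle."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0
        self.branch = GhostModule(channels // 2, channels // 2)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)              # ShuffleNetV2 channel split
        out = torch.cat([x1, self.branch(x2)], dim=1)
        return channel_shuffle(out, groups=2)   # mix information across branches


if __name__ == "__main__":
    unit = GhostShuffleUnit(64)
    y = unit(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 64, 80, 80])
```

Under these assumptions, only half of the channels pass through convolutions in each unit, and part of those are produced by cheap depthwise operations, which is the kind of design that keeps parameters, FLOPs, and MAC low enough to serve as search-space building blocks.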