Abstract
Unmanned aerial vehicle (UAV)-captured panoptic remote sensing images have great potential to enable robotics-inspired intelligent solutions for land cover mapping, disaster management, smart agriculture through automatic vegetation detection, and real-time environmental surveillance. However, many of these applications require fast task execution to operate in real time. To this end, this article proposes a lightweight convolutional neural network (CNN) architecture, termed LW-AerialSegNet, which preserves the network's feed-forward nature while expanding the intermediate layers to gather features crucial for the segmentation task. Moreover, the network combines densely connected architecture with depth-wise separable convolutions to reduce the number of model parameters, so that it can be deployed on Internet of Things (IoT) edge devices for real-time segmentation. Two UAV-based image segmentation datasets, the NITRDrone dataset and the Urban Drone Dataset (UDD), are used to evaluate the proposed architecture. It achieves an intersection over union (IoU) of 82% and 71% on the NITRDrone and UDD datasets, respectively, thereby demonstrating its superiority over the considered state-of-the-art methods. The experimental results indicate that depth-wise separable convolutions significantly reduce the number of trainable parameters, making the model suitable for deployment on small-scale edge-computing devices. The proposed architecture can be deployed in real-life settings on a UAV to extract objects such as vegetation and road lines, and can therefore be used for mapping urban areas, agricultural lands, etc.
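The parameter saving claimed for depth-wise separable convolutions can be illustrated with a simple count. A minimal sketch, assuming a square kernel of size k, C_in input channels, and C_out output channels (the function names below are illustrative, not from the paper):

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameter count of a standard 2-D convolution (bias omitted):
    each of the c_out filters spans k x k x c_in weights."""
    return k * k * c_in * c_out


def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Parameter count of a depth-wise separable convolution (bias omitted):
    a k x k depth-wise filter per input channel, followed by a
    1 x 1 point-wise convolution mixing channels."""
    depthwise = k * k * c_in        # one k x k filter per input channel
    pointwise = c_in * c_out        # 1 x 1 convolution across channels
    return depthwise + pointwise


# Example: a 3x3 layer with 64 input and 128 output channels.
std = standard_conv_params(3, 64, 128)        # 73728
sep = depthwise_separable_params(3, 64, 128)  # 8768
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

The ratio approaches 1/C_out + 1/k^2, so for a 3x3 kernel the separable form needs roughly an order of magnitude fewer parameters, which is the source of the model-size reduction the abstract describes.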