Abstract
To provide an economical way to infer the depth of the environment surrounding unmanned agricultural vehicles (UAVs), a lightweight depth estimation model called MonoDA, based on a convolutional neural network, is proposed. The model is trained on sequences of frames from monocular videos and is composed of two subnetworks: a depth estimation subnetwork and a pose estimation subnetwork. The former is a modified version of U-Net with a reduced number of bridges, while the latter takes EfficientNet-B0 as its backbone to extract features from sequential frames and predict the pose transformations between them. A self-supervised strategy is adopted during training, so no depth labels are required. Instead, adjacent frames in the image sequence and the reprojection relation defined by the predicted pose are used to train the model. The subnetworks' outputs (depth map and relative pose) are used to reconstruct the input frame, and a self-supervised loss between the reconstructed and original frames is computed. This loss is then used to update the parameters of both subnetworks through backpropagation. Several experiments were conducted to evaluate the model, and the results show that MonoDA achieves competitive accuracy on both the KITTI raw dataset and our vineyard dataset. In addition, the method is insensitive to color. On the NVIDIA Jetson TX2, the computing platform of our UAV's environment perception system, the model runs at 18.92 FPS. In summary, our approach provides an economical solution for depth estimation with monocular cameras, achieves a good trade-off between accuracy and speed, and can serve as a novel auxiliary depth detection paradigm for UAVs.
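To make the training strategy concrete, the sketch below illustrates the kind of view-synthesis reconstruction loss described above: the predicted depth map and relative pose are used to warp an adjacent source frame into the target view, and the photometric error between the warped and original target frames supervises both subnetworks. This is a minimal illustration only, not the authors' implementation; the tensor names, the intrinsics matrix K, and the plain L1 photometric term are assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of the self-supervised
# reprojection loss used to train a monocular depth model like MonoDA.
import torch
import torch.nn.functional as F

def reprojection_grid(depth, pose, K):
    """Build a sampling grid that maps target-frame pixels into the source frame.

    depth: (B, 1, H, W) predicted depth of the target frame
    pose:  (B, 4, 4) relative pose T_{target->source} from the pose subnetwork
    K:     (B, 3, 3) camera intrinsics (assumed known)
    returns: (B, H, W, 2) grid in [-1, 1] for grid_sample.
    """
    B, _, H, W = depth.shape
    device = depth.device
    # Homogeneous pixel coordinates of the target image.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(1, 3, -1).expand(B, -1, -1)
    # Back-project pixels to 3-D camera points using the predicted depth.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)
    cam = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    # Transform points into the source view and project with the intrinsics.
    src = K @ (pose @ cam)[:, :3]
    uv = src[:, :2] / src[:, 2:].clamp(min=1e-6)
    # Normalise pixel coordinates to [-1, 1].
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    return torch.stack([u, v], dim=-1).view(B, H, W, 2)

def reconstruction_loss(target, source, depth, pose, K):
    """Photometric loss between the target frame and the source frame warped
    into the target view (L1 only here; other terms are omitted for brevity)."""
    grid = reprojection_grid(depth, pose, K)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (warped - target).abs().mean()
```

In practice, this loss would be evaluated for each adjacent frame and backpropagated through both the depth and pose subnetworks, which is what allows training without ground-truth depth labels.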
Highlights
Depth information has proved extremely useful in various computer vision and robotic tasks, and it is one of the essential research topics in the field of unmanned agricultural vehicles (UAVs) [1]
An environment perception system combining lidar and cameras is designed for our UAV
A lightweight self-supervised convolutional neural network (CNN) model is proposed as an economical method to estimate the distances of obstacles around the UAV; it predicts a depth map from a single RGB image
Summary
Depth information has proved extremely useful in various computer vision and robotic tasks, and it is one of the essential research topics in the field of unmanned agricultural vehicles (UAVs) [1]. Researchers have found that the accuracy of depth information can be effectively improved by using several depth detectors simultaneously [17]. Based on this finding, an environment perception system combining lidar and cameras is designed for our UAV. NVIDIA's edge computing device, the Jetson TX2, is chosen as the computing platform for the depth estimation task. This design combines low cost, compact size, and high energy efficiency.