Abstract

To provide an economical way to infer the depth of the environment surrounding unmanned agricultural vehicles (UAVs), a lightweight depth estimation model called MonoDA, based on a convolutional neural network, is proposed. A series of sequential frames from monocular videos is used to train the model. The model consists of two subnetworks: a depth estimation subnetwork and a pose estimation subnetwork. The former is a modified U-Net with a reduced number of bridges, while the latter uses EfficientNet-B0 as its backbone to extract features from sequential frames and predict the pose transformations between them. Training follows a self-supervised strategy, so no depth labels are needed. Instead, adjacent frames in the image sequence and the reprojection relation given by the pose are used to train the model. The subnetworks' outputs (depth map and pose) are used to reconstruct the input frame, and a self-supervised loss between the reconstructed input and the original input is computed. This loss then updates the parameters of both subnetworks through backpropagation. Several experiments are conducted to evaluate the model's performance, and the results show that MonoDA achieves competitive accuracy on both the KITTI raw dataset and our vineyard dataset. In addition, our method is insensitive to color. On NVIDIA JETSON TX2, the computing platform of our UAV's environment perceptual system, the model runs at 18.92 FPS. In summary, our approach offers an economical solution for depth estimation using monocular cameras, achieves a good trade-off between accuracy and speed, and can serve as a novel auxiliary depth detection paradigm for UAVs.
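The self-supervised training signal described above can be made concrete with a short sketch. The PyTorch code below is an illustrative assumption, not the authors' released implementation: the helper names `backproject`, `project`, and `reprojection_loss` (and the `depth_net`/`pose_net` names in the usage comment) are hypothetical. It shows the standard view-synthesis recipe the abstract describes: pixels of the target frame are lifted to 3-D with the predicted depth, transformed by the predicted pose, projected into the neighboring source frame, and used to reconstruct the target so a photometric loss can supervise both subnetworks.

```python
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    # Lift every pixel of the target frame to a 3-D point using its depth.
    # depth: (B, 1, H, W); K_inv: (B, 3, 3) inverse camera intrinsics.
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)
    rays = K_inv @ pix.expand(b, -1, -1)      # (B, 3, H*W) viewing rays
    return rays * depth.reshape(b, 1, -1)     # scale rays by predicted depth

def project(points, K, T, h, w):
    # Apply the predicted pose T (B, 4, 4) and project into the source view,
    # returning a sampling grid normalized to [-1, 1] for grid_sample.
    b, _, n = points.shape
    ones = torch.ones(b, 1, n, dtype=points.dtype, device=points.device)
    cam = (T @ torch.cat([points, ones], dim=1))[:, :3]   # (B, 3, H*W)
    pix = K @ cam
    x = pix[:, 0] / pix[:, 2].clamp(min=1e-6)
    y = pix[:, 1] / pix[:, 2].clamp(min=1e-6)
    x = 2 * x / (w - 1) - 1
    y = 2 * y / (h - 1) - 1
    return torch.stack([x, y], dim=-1).reshape(b, h, w, 2)

def reprojection_loss(target, source, depth, T, K, K_inv):
    # Reconstruct the target frame by sampling the source frame at the
    # reprojected coordinates, then penalize the photometric difference.
    b, _, h, w = target.shape
    grid = project(backproject(depth, K_inv), K, T, h, w)
    recon = F.grid_sample(source, grid, padding_mode="border",
                          align_corners=True)
    # A full loss would typically combine L1 with an SSIM term; plain L1
    # is kept here for brevity.
    return (recon - target).abs().mean()

# Usage (illustrative): given consecutive frames frame_t and frame_s,
# loss = reprojection_loss(frame_t, frame_s, depth_net(frame_t),
#                          pose_net(frame_t, frame_s), K, K.inverse())
```

Because the loss depends only on the input frames themselves, backpropagating it trains both subnetworks without any ground-truth depth labels.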

Highlights

  • Depth information has proven extremely useful in various computer vision and robotic tasks, and it is one of the essential research topics in the field of unmanned agricultural vehicles (UAVs) [1]

  • An environment perceptual system combining lidar and cameras is designed for our UAV

  • A lightweight self-supervised convolutional neural network (CNN) model is proposed as an economical method to estimate the distances of obstacles around the UAV; it can predict a depth map from a single RGB image (see the sketch after this list)
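As a rough illustration of the third highlight, the sketch below shows a toy U-Net-style depth subnetwork in PyTorch with one skip connection ("bridge") removed, in the spirit of the reduced-bridge design. The `TinyDepthNet` name, channel widths, and layer count are illustrative assumptions and do not reproduce the actual MonoDA architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec2 = conv_block(128 + 64, 64)  # single retained bridge
        self.dec1 = conv_block(64, 32)        # bridge removed at this level
        self.head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(e3), e2], dim=1))
        d1 = self.dec1(self.up(d2))
        # Sigmoid disparity in (0, 1); its inverse gives a relative depth map.
        return torch.sigmoid(self.head(d1))

disp = TinyDepthNet()(torch.rand(1, 3, 192, 640))  # -> (1, 1, 192, 640)
```

Removing bridges trades some fine detail in the decoder for fewer feature maps to store and concatenate, which is what makes this kind of network attractive on an embedded platform such as the JETSON TX2.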


Introduction

Depth information has proven extremely useful in various computer vision and robotic tasks, and it is one of the essential research topics in the field of unmanned agricultural vehicles (UAVs) [1]. Researchers have found that the accuracy of depth information can be effectively improved by using several depth detectors simultaneously [17]. Based on this finding, an environment perceptual system combining lidar and cameras is designed for our UAV. NVIDIA's edge computing device, the JETSON TX2, is chosen as the computing platform to carry out the depth estimation task. This paradigm combines the advantages of low cost, compact size, and high energy efficiency.
