Abstract
For the task of monocular depth estimation, self-supervised learning supervises training by penalizing the pixel difference between the target image and the warped reference image, obtaining results comparable to those of fully supervised methods. However, problematic pixels in low-texture regions are ignored: with stereo pairs as the input, most researchers assume that no pixels violate the camera-motion assumption, which leads to an optimization problem in these regions. To tackle this problem, we instead compute the photometric loss on the lowest-level feature maps and apply first- and second-order smoothing to the depth, ensuring consistent gradients during optimization. Given the shortcomings of ResNet as the backbone, we propose a new depth estimation network architecture that improves edge localization accuracy and yields clear outline information even at smoothed low-texture boundaries. To obtain more stable and reliable quantitative evaluation results, we introduce a virtual data set into the self-supervised task, since it provides dense ground-truth depth maps with pixel-by-pixel correspondence. Taking stereo pairs as the input, we achieve performance that exceeds that of prior methods on both the Eigen split of KITTI and the VKITTI2 data set.
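The first- and second-order depth smoothing mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the edge-aware exponential weighting and the `alpha` parameter are assumptions commonly used in self-supervised depth losses.

```python
import numpy as np

def smoothness_loss(depth, image, alpha=1.0):
    """Illustrative first- and second-order edge-aware depth smoothness.

    depth: (H, W) array; image: (H, W, C) array in [0, 1].
    Gradients are penalized less where the image itself has strong edges.
    """
    # First-order depth gradients (forward differences).
    d_dx = np.abs(depth[:, 1:] - depth[:, :-1])
    d_dy = np.abs(depth[1:, :] - depth[:-1, :])
    # Second-order depth gradients (discrete Laplacian components).
    d_dxx = np.abs(depth[:, 2:] - 2 * depth[:, 1:-1] + depth[:, :-2])
    d_dyy = np.abs(depth[2:, :] - 2 * depth[1:-1, :] + depth[:-2, :])
    # Edge-aware weights from image gradients, averaged over channels.
    i_dx = np.mean(np.abs(image[:, 1:] - image[:, :-1]), axis=-1)
    i_dy = np.mean(np.abs(image[1:, :] - image[:-1, :]), axis=-1)
    w_x = np.exp(-alpha * i_dx)
    w_y = np.exp(-alpha * i_dy)
    first = (d_dx * w_x).mean() + (d_dy * w_y).mean()
    second = (d_dxx * w_x[:, 1:]).mean() + (d_dyy * w_y[1:, :]).mean()
    return first + second
```

Penalizing both orders keeps the depth not only locally flat but also free of gradient kinks, which is what gives consistent gradients in low-texture regions where the photometric term is uninformative.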
Highlights
Science and Technology on Complex Electronic System Simulation Laboratory, Space Engineering University, Digital Media School, Beijing Film Academy, Beijing 100088, China
We describe in detail the experimental procedures of monocular depth estimation using stereo pairs as the input
We achieve the transformation between the two frames of a stereo pair through the fixed baseline, without needing to estimate the pose between them
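The second highlight rests on a simple fact: for a rectified stereo pair, the relative pose is a known horizontal translation by the baseline, so depth converts directly to disparity with no pose network. A minimal sketch (the focal length and baseline in the usage example are illustrative values, roughly in the range of KITTI's calibration):

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline_m):
    """Convert depth (meters) to horizontal disparity (pixels) for a
    rectified stereo pair. The left-right transform is a pure translation
    of baseline_m along x, so no pose estimation is required."""
    return focal_px * baseline_m / depth

# Usage: a pixel at depth 3.6 m with focal 720 px and baseline 0.54 m
# warps 108 px horizontally into the other view.
disp = depth_to_disparity(np.array([3.6]), focal_px=720.0, baseline_m=0.54)
```

Sampling the reference image at `x - disp` (for a left-to-right warp) then produces the warped image used in the photometric loss.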
Summary
Depth estimation from a single image is a critical computer vision task for helping computers understand real scenes. Self-supervised approaches leverage geometric constraints on stereo images or image sequences as the sole source of supervision. However, problematic pixels in low-texture regions are ignored, since with stereo pairs as the input most researchers assume that no pixels violate the camera-motion assumption, which leads to an optimization problem in these regions. To tackle this problem, we compute the photometric loss on the lowest-level feature maps and apply first- and second-order smoothing to the depth, ensuring consistent gradients during optimization.