Abstract

In recent years, depth estimation of target images using convolutional neural networks (CNNs) has been widely adopted in artificial intelligence, scene understanding, and three-dimensional (3D) reconstruction. Fusing semantic segmentation information with depth estimation can further improve the quality of the acquired depth maps. However, how to deeply combine image semantic information with depth information, and how to exploit image edge information more accurately to improve depth-map accuracy, remain open problems. To this end, we propose a novel depth estimation model based on semantic segmentation for estimating depth from monocular images. First, a model with parameters shared between semantic segmentation and depth estimation is built, in which the semantic segmentation information guides depth acquisition in an auxiliary manner. Then, a multi-scale feature fusion module fuses the feature information captured at different layers of the network, effectively exploiting both local and global features to generate high-resolution feature maps, so that depth-map quality is improved by optimizing the semantic segmentation model. Experimental results show that the model fully extracts and combines image feature information and improves the quality of monocular depth estimation. Compared with other state-of-the-art models, our model shows clear advantages.
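The architecture the abstract describes, a shared-parameter encoder serving both a depth head and a segmentation head, with multi-scale features from different layers fused into one high-resolution map, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; all module names, channel widths, and the class count are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMultiTaskNet(nn.Module):
    """Hypothetical sketch: one shared encoder feeds two task heads
    (depth and semantic segmentation) via multi-scale feature fusion."""

    def __init__(self, num_classes=13):  # class count is an assumption
        super().__init__()
        # Shared encoder: three stages at progressively lower resolution,
        # whose parameters serve both tasks.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Multi-scale fusion: project each stage to a common width,
        # upsample to full resolution, and concatenate.
        self.proj1 = nn.Conv2d(16, 16, 1)
        self.proj2 = nn.Conv2d(32, 16, 1)
        self.proj3 = nn.Conv2d(64, 16, 1)
        # Task heads operate on the fused high-resolution feature map;
        # the segmentation head plays the auxiliary guiding role.
        self.depth_head = nn.Conv2d(48, 1, 3, padding=1)
        self.seg_head = nn.Conv2d(48, num_classes, 3, padding=1)

    def forward(self, x):
        f1 = self.stage1(x)        # local, high-resolution features
        f2 = self.stage2(f1)       # mid-level features
        f3 = self.stage3(f2)       # global, low-resolution features
        size = f1.shape[2:]
        fused = torch.cat([
            self.proj1(f1),
            F.interpolate(self.proj2(f2), size=size, mode='bilinear',
                          align_corners=False),
            F.interpolate(self.proj3(f3), size=size, mode='bilinear',
                          align_corners=False),
        ], dim=1)
        return self.depth_head(fused), self.seg_head(fused)

model = SharedMultiTaskNet()
depth, seg = model(torch.randn(1, 3, 64, 64))
```

In training, the two heads would be optimized jointly (e.g. a weighted sum of a depth regression loss and a segmentation cross-entropy loss), which is how the shared parameters let segmentation supervision assist depth estimation.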
