Abstract

The key to depth estimation from a single image lies in inferring the distance of various objects without copying texture while maintaining clear object boundaries. In this paper, we propose depth estimation from a single image using an edge extraction network and the dark channel prior (DCP). We build an edge extraction network based on generative adversarial networks (GANs) to select valid depth edges from the many edges in an image. We use the DCP to generate a transmission map that represents distance from the camera; the transmission map is generated by applying minimum-value filtering based on the DCP. First, we concatenate the transmission map with the original RGB image to form a tensor, i.e., RGB + T. Second, we generate an initial depth image from this tensor through the generator, which infers depth with stacked residual blocks. Third, we compare the edge map of the initial depth image with that of the input RGB image to select valid depth edges; both edge maps are produced by the edge extraction network. Finally, a discriminator distinguishes real from fake on the generated depth image, which enhances the performance of the generator. Experiments on the NYU, Make3D and MPI Sintel datasets demonstrate that the proposed network generates clear edges in depth images and outperforms state-of-the-art methods in terms of both visual quality and quantitative measurements.
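The transmission-map step described above can be sketched as follows. This is a minimal illustration of the standard dark channel prior with minimum-value filtering (following He et al.'s formulation, which the abstract's DCP step builds on); the patch size, the `omega` weight, and the atmospheric-light heuristic are conventional choices assumed here, not values taken from the paper:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB channels, then a local minimum filter
    over a patch x patch neighborhood (the minimum-value filtering step)."""
    return minimum_filter(img.min(axis=2), size=patch)

def transmission_map(img, omega=0.95, patch=15):
    """Estimate a transmission map t(x) from an RGB image in [0, 1].

    Lower t corresponds to pixels farther from the camera, which is why
    t can be stacked with the RGB image as an extra distance cue (RGB + T).
    """
    dc = dark_channel(img, patch)
    # Atmospheric light A: per-channel max among the brightest 0.1% of
    # dark-channel pixels (a common heuristic, assumed here).
    n = max(1, int(dc.size * 0.001))
    idx = np.argsort(dc.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].max(axis=0)
    # t(x) = 1 - omega * dark_channel(I / A)
    return 1.0 - omega * dark_channel(img / A, patch)
```

The resulting single-channel map can then be concatenated with the RGB image along the channel axis to form the four-channel RGB + T input tensor.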
