Abstract

Panoramic depth estimation is a crucial technique in 360° perception, and it has gained great popularity in recent years due to its ability to provide complete visual coverage of the scene. Panoramic images are commonly presented in the form of equirectangular projection. However, this projection exhibits significant distortions towards the poles, making it unsuitable for directly applying traditional depth estimation frameworks designed for perspective imaging. The cubemap projection offers distortion-free images on each face but exhibits obvious discontinuities between adjacent faces. Therefore, exploiting the respective advantages of these two projections, we propose an asymmetric fusion network (AFNet), in which the fused information is fed back only to the equirectangular projection branch during the encoding stage to guide subsequent encoding. We also design a dual projection feature fusion (DPFF) module in our framework to better combine the strengths of both projections. Furthermore, we introduce an edge information enhancement (EIE) module to enhance the edge contours of the final depth map. Experiments on four representative datasets (including both synthetic and real-world scenarios) show that our proposed AFNet achieves favorable performance compared with existing methods.
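The asymmetric fusion idea described above can be sketched in a toy form: features from the cubemap branch (assumed already re-projected onto the equirectangular grid) are blended into the equirectangular branch only, while the cubemap branch continues unchanged. This is a minimal illustrative sketch, not the paper's actual DPFF module; the function name, shapes, and the simple weighted-sum fusion are assumptions made for illustration.

```python
import numpy as np

def asymmetric_fuse(erp_feat, c2e_feat, alpha=0.25):
    """Toy one-directional fusion (illustrative only, not the paper's DPFF).

    erp_feat : (C, H, W) features from the equirectangular (ERP) branch.
    c2e_feat : (C, H, W) cubemap-branch features, assumed already
               re-projected (cube-to-ERP) so the grids are aligned.
    alpha    : blending weight for the cubemap contribution.

    Only the ERP branch receives fused features; the cubemap branch
    would keep its original features, which is what makes the
    fusion "asymmetric".
    """
    return (1.0 - alpha) * erp_feat + alpha * c2e_feat

# Usage: blend distortion-free cubemap features into the ERP stream.
erp = np.ones((4, 8, 16))    # stand-in ERP encoder features
c2e = np.zeros((4, 8, 16))   # stand-in cube-to-ERP features
fused = asymmetric_fuse(erp, c2e, alpha=0.25)
```

In the real network the fusion would be a learned module (e.g. convolutions over concatenated features) rather than a fixed blend, but the one-directional feedback pattern is the same.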
