Abstract
Solving the depth estimation problem in a 360° image space, which offers holistic scene perception, has become a trend in recent years. However, depth estimation in common 360° images is prone to geometric distortion. Therefore, this study proposes a new method, CAPDepth, to address the geometric-distortion problem of 360° monocular depth estimation. We reduce distortion in tangential projections through an optimized content-aware projection (CAP) and a geometric embedding module that captures more features for global depth consistency. Additionally, we adopt an index map and a de-blocking scheme to improve the inference efficiency and output quality of our CAPDepth model. Our experiments show that CAPDepth greatly alleviates the distortion problem, producing smoother and more accurate depth predictions, and improves performance in panoramic depth estimation.