Abstract

Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take snapshots from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid—we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the location of each snapshot to be known: the disparity of an object between two images depends both on the distance of the camera to the object and on the distance between the two camera positions. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences instead, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, through the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.
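The abstract's core observation can be made concrete with a small numerical sketch. Along a linear trajectory, a scene point traces a straight line in the epipolar plane image, so its disparity relative to the reference view scales linearly with the (unknown) camera offset. The toy example below (a hypothetical illustration under idealized assumptions, not the paper's actual algorithm) shows how dense per-point disparities with known slopes admit a closed-form least-squares estimate of each camera offset:

```python
import numpy as np

# Hypothetical illustration, not the paper's method: for point p with
# inverse-depth slope s[p], the ideal disparity in view i is
# D[p, i] = s[p] * b_i, where b_i is the camera offset along the line.
# Each b_i then follows from least squares over all dense matches.

rng = np.random.default_rng(0)
true_offsets = np.array([0.0, 0.1, 0.2, 0.35, 0.5])  # camera positions (m)
inv_depth = rng.uniform(0.2, 2.0, size=200)          # per-point EPI slopes
disp = np.outer(inv_depth, true_offsets)             # ideal disparities
disp += rng.normal(scale=1e-3, size=disp.shape)      # matching noise

# Per-view closed-form least-squares estimate: b_i = <s, D[:, i]> / <s, s>
est_offsets = disp.T @ inv_depth / (inv_depth @ inv_depth)
```

With many dense correspondences per view, the estimate averages out matching noise, which is the intuition behind preferring dense over sparse matches for location estimation.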

Highlights

  • Two-image stereomatching has received much attention among passive depth estimation methods, in large part because of the practicality of the set-up

  • We take a look at the monotonicity constraint and illustrate its effectiveness at estimating camera positions far from the reference camera position

  • We propose a novel technique to estimate camera locations in a multi-view stereomatching set-up

Summary

Introduction

Two-image stereomatching has received much attention among passive depth estimation methods, in large part because of the practicality of the set-up. Multi-image depth estimation has access to more data for estimating the disparity and is therefore expected to deliver better results. Stereomatching performs depth estimation by detecting objects in two images taken from different locations. The disparity between the object's positions in the two images is inversely proportional to the distance of the object from the baseline, and directly proportional to the width of the baseline. Objects that appear in only one of the two views are troublesome: these occlusions are an important problem in two-image stereomatching. The expansive existing literature on the topic is evidence of this: occlusion handling is one of the main discriminating features among the many stereomatching techniques.
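The two proportionalities above correspond to the classic triangulation relation Z = f·B/d, with f the focal length in pixels, B the baseline width, and d the disparity. A minimal sketch (with illustrative values, not taken from the paper):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth is inversely proportional to disparity and directly
    proportional to the baseline width: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 1000 px, B = 0.1 m, d = 20 px
z = depth_from_disparity(disparity_px=20.0, focal_px=1000.0, baseline_m=0.1)
# z = 1000 * 0.1 / 20 = 5.0 metres
```

Note that doubling the baseline doubles the disparity at the same depth, which is why the camera spacing must be known before disparities can be converted to depths.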


