Abstract
Depth maps are used in many applications, such as robotic navigation, driverless vehicles, video production and 3D reconstruction. Both passive stereo and time‐of‐flight (ToF) cameras can provide depth maps of captured real scenes, but each has innate limitations. Since ToF cameras and passive stereo are intrinsically complementary for certain scenes, it is desirable to appropriately leverage all the information available from both. Accordingly, this study proposes an approach that integrates ToF cameras and passive stereo to obtain high‐accuracy depth maps. The main contributions are twofold: first, an energy cost function is designed that utilises the depth map from the ToF camera to guide the stereo matching of passive stereo; second, a weight function is designed for pixel‐level fusion of the two depth maps. Experiments show that the proposed approach achieves improved results with high accuracy and robustness.
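The second contribution, pixel‐level fusion of the ToF and stereo depth maps under per‐pixel weights, can be sketched as follows. This is only an illustrative form of weighted fusion: the function name `fuse_depth` and the normalised weighted average are assumptions for this sketch, not the paper's actual weight function, which is part of its contribution.

```python
import numpy as np

def fuse_depth(d_tof, d_stereo, w_tof, w_stereo, eps=1e-8):
    """Per-pixel weighted fusion of two depth maps (illustrative sketch).

    d_tof, d_stereo : HxW depth maps from the ToF camera and passive stereo.
    w_tof, w_stereo : HxW per-pixel confidence weights; the paper designs
                      its own weight function, so these are placeholders.
    """
    # Weighted average of the two depth estimates at every pixel;
    # eps guards against division by zero where both weights vanish.
    numerator = w_tof * d_tof + w_stereo * d_stereo
    denominator = w_tof + w_stereo + eps
    return numerator / denominator
```

With equal weights this reduces to a plain per‐pixel average; in practice the weights would favour whichever sensor is more reliable at each pixel (e.g. stereo in textured regions, ToF in textureless ones).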