Abstract

Depth maps are used in many applications, such as robotic navigation, autonomous driving, video production, and 3D reconstruction. Both passive stereo and time-of-flight (ToF) cameras can provide depth maps of captured real scenes, but each has innate limitations. Since ToF cameras and passive stereo are intrinsically complementary in many scenes, it is desirable to appropriately leverage all the information that both sensors provide. This study therefore proposes an approach that integrates ToF cameras and passive stereo to obtain high-accuracy depth maps. The main contributions are twofold: first, an energy cost function is designed that uses the depth map from the ToF camera to guide the stereo matching of passive stereo; second, a weight function is designed for pixel-level fusion of the two depth maps. Experiments show that the proposed approach achieves improved results with high accuracy and robustness.
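The abstract does not give the exact form of the energy cost function or the fusion weights. The Python sketch below illustrates one plausible realisation of the two contributions under stated assumptions: a ToF-guided term added to a stereo cost volume (using a truncated quadratic penalty, an assumption), and confidence-weighted pixel-level fusion. The names (energy_cost, fuse_depths) and parameters (lam, sigma) are hypothetical, not the paper's notation.

```python
import numpy as np

def energy_cost(stereo_cost, tof_depth, disparity_candidates,
                focal_length, baseline, lam=0.5, sigma=1.0):
    """Augment a stereo matching cost volume with a ToF guidance term.

    stereo_cost: (H, W, D) per-pixel matching cost volume.
    tof_depth: (H, W) ToF depth map, 0 where invalid.
    disparity_candidates: (D,) candidate disparities.
    lam, sigma: illustrative guidance weight and tolerance (assumptions).
    """
    # Convert ToF depth to an expected disparity: d = f * B / Z.
    tof_disp = np.where(tof_depth > 0,
                        focal_length * baseline / np.maximum(tof_depth, 1e-6),
                        0.0)
    # Penalise disparities far from the ToF prediction; truncation keeps
    # the guidance robust where the ToF measurement is unreliable.
    deviation = (disparity_candidates[None, None, :] - tof_disp[..., None]) ** 2
    guidance = np.minimum(deviation / (2.0 * sigma ** 2), 1.0)
    # Apply guidance only where a valid ToF measurement exists.
    guidance = np.where(tof_depth[..., None] > 0, guidance, 0.0)
    return stereo_cost + lam * guidance

def fuse_depths(stereo_depth, tof_depth, stereo_conf, tof_conf):
    """Pixel-level fusion: blend the two depth maps with per-pixel
    confidence weights, normalised so the weights sum to one."""
    w_sum = stereo_conf + tof_conf
    w_sum = np.where(w_sum > 0, w_sum, 1.0)  # avoid division by zero
    return (stereo_conf * stereo_depth + tof_conf * tof_depth) / w_sum
```

As a usage sketch, the disparity per pixel would be taken as the argmin of the guided cost volume along its last axis, converted back to depth, and then passed with the ToF depth into fuse_depths.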
