Abstract

3D reconstruction is a useful tool for surgical planning and guidance. Supervised methods for disparity/depth estimation are the state of the art, with demonstrated performance far superior to alternatives such as self-supervised and traditional geometric methods. However, supervised training requires large datasets, and in this field data is lacking. In this paper, we investigate the learning of structured light projections to enhance the development of disparity estimation networks, improving supervised learning on small datasets without the need to collect extra data. We first show that it is possible to learn the projection of structured light on a scene. Second, we show that the joint training of structured light and disparity, using a multi-task learning (MTL) framework, improves the learning of disparity. Our MTL setup outperformed the single-task learning (STL) network in every validation test. Notably, in the generalisation test, the STL error was 1.4 times worse than that of the best MTL performance. A dataset containing stereoscopic images, disparity maps and structured light projections on medical phantoms and ex vivo tissue was created for evaluation, together with virtual scenes. This dataset will be made publicly available in the future.
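To illustrate the kind of joint training described above, the sketch below shows a minimal multi-task setup in which a shared encoder feeds two separate heads, one predicting disparity and one predicting the structured-light projection, trained with a weighted sum of per-task losses. This is only an illustrative sketch: the module names, layer sizes, loss weighting (`w_light`) and the use of L1 losses are assumptions for demonstration, not the architecture or training objective used in the paper.

```python
# Hypothetical sketch of multi-task learning (MTL) for joint disparity and
# structured-light prediction. Names and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Small convolutional encoder shared by both task heads."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class TaskHead(nn.Module):
    """Lightweight decoder producing a single-channel dense prediction."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, 3, padding=1),
        )

    def forward(self, f):
        return self.net(f)


class MultiTaskNet(nn.Module):
    """Shared encoder with separate heads for disparity and structured light."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.disparity_head = TaskHead()
        self.light_head = TaskHead()

    def forward(self, x):
        f = self.encoder(x)
        return self.disparity_head(f), self.light_head(f)


def mtl_loss(disp_pred, disp_gt, light_pred, light_gt, w_light=0.5):
    """Weighted sum of per-task L1 losses; w_light balances the auxiliary task."""
    l1 = nn.L1Loss()
    return l1(disp_pred, disp_gt) + w_light * l1(light_pred, light_gt)


if __name__ == "__main__":
    net = MultiTaskNet()
    img = torch.randn(2, 3, 64, 64)        # toy input image batch
    disp_gt = torch.randn(2, 1, 64, 64)    # ground-truth disparity (toy)
    light_gt = torch.randn(2, 1, 64, 64)   # structured-light target (toy)
    disp_pred, light_pred = net(img)
    loss = mtl_loss(disp_pred, disp_gt, light_pred, light_gt)
    loss.backward()
    print(float(loss))
```

The key design point conveyed by the abstract is that the auxiliary structured-light task shares representation learning with the main disparity task, so the auxiliary supervision can improve disparity estimation without additional disparity labels.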


