Abstract

Stereo matching methods that enable depth estimation are crucial for visualization-enhancement applications in computer-assisted surgery (CAS). Learning-based stereo matching methods show promise for predicting accurate results on video images. However, they require large amounts of training data, and their performance may degrade under domain shift. Maintaining the robustness and improving the performance of learning-based methods remain open problems. To overcome these limitations, we propose a disparity refinement framework, consisting of a local disparity refinement method and a global disparity refinement method, that improves the results of learning-based stereo matching methods in a cross-domain setting. The learning-based stereo matching methods are pre-trained on a large public dataset of natural images and tested on a dataset of laparoscopic images. Results on the SERV-CT dataset show that the proposed framework can effectively refine disparity maps on an unseen dataset, even when they are corrupted by noise, without compromising correct predictions, provided the network generalizes well to unseen data. As such, the proposed disparity refinement framework has the potential to work with learning-based methods to achieve robust and accurate disparity prediction. Since no large laparoscopic dataset exists for training learning-based methods and the generalization ability of networks remains limited, incorporating the proposed disparity refinement framework into existing networks would be beneficial for more accurate and robust depth estimation.
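The abstract does not detail the two refinement methods, but the general idea of refining a noisy disparity map can be sketched generically. The following is a hypothetical illustration only, assuming a left-right consistency check to flag unreliable pixels and median filling from consistent neighbors; it is not the authors' actual algorithm:

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tau=1.0):
    # Flag a left-image pixel as reliable if the disparity at its
    # corresponding right-image location agrees within tau pixels.
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    return np.abs(disp_left - disp_right[ys, x_right]) <= tau

def refine_local(disp, valid, k=2):
    # Replace each unreliable pixel with the median disparity of the
    # reliable pixels in its (2k+1)x(2k+1) neighborhood.
    out = disp.copy()
    h, w = disp.shape
    for y, x in zip(*np.where(~valid)):
        y0, y1 = max(0, y - k), min(h, y + k + 1)
        x0, x1 = max(0, x - k), min(w, x + k + 1)
        patch = disp[y0:y1, x0:x1][valid[y0:y1, x0:x1]]
        if patch.size:
            out[y, x] = np.median(patch)
    return out
```

For example, on a constant disparity map with one noise-corrupted pixel, the consistency check isolates the outlier and the median fill restores it; a global refinement stage would additionally enforce smoothness across the whole map (e.g., via an optimization over all pixels) rather than operating per neighborhood.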
