Abstract

In recent intelligent-robot-assisted surgery studies, an urgent issue is how to accurately detect the motion of instruments and soft tissue from intra-operative images. Although optical flow from computer vision is a powerful solution to the motion-tracking problem, pixel-wise optical-flow ground truth is difficult to obtain for real surgery videos, which rules out supervised learning. Thus, unsupervised learning methods are critical. However, current unsupervised methods face the challenge of heavy occlusion in the surgical scene. This paper proposes a novel unsupervised learning framework to estimate motion from surgical images under occlusion. The framework is built around a Motion Decoupling Network that estimates tissue motion and instrument motion under different constraints. Notably, the network integrates a segmentation subnet that estimates the instrument segmentation map in an unsupervised manner to obtain the occlusion region and improve the dual motion estimation. Additionally, a hybrid self-supervised strategy with occlusion completion is introduced to recover realistic visual cues. Extensive experiments on two surgical datasets show that the proposed method achieves accurate motion estimation for intra-operative scenes, outperforming other unsupervised methods by a margin of 15% in accuracy. The average tissue motion estimation error is below 2.2 pixels on both surgical datasets.
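The key idea of occlusion-aware unsupervised flow learning can be illustrated with a minimal sketch: the photometric self-supervision signal is evaluated only on pixels outside the instrument occlusion mask, so that occluded tissue does not corrupt the loss. The code below is an illustrative assumption, not the paper's actual network or loss; the function names (`warp_nearest`, `masked_photometric_loss`) and the nearest-neighbor warping are simplifications introduced here for clarity.

```python
import numpy as np

def warp_nearest(img, flow):
    """Warp img toward the reference frame by nearest-neighbor sampling
    along the estimated flow field (flow[y, x] = (dx, dy))."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[ys2, xs2]

def masked_photometric_loss(frame1, frame2, flow, occlusion_mask):
    """Mean absolute photometric error between frame1 and the warped
    frame2, computed only where occlusion_mask == 0 (visible tissue).
    Pixels covered by the instrument (mask == 1) are excluded, so the
    self-supervision signal is not corrupted by the occluder."""
    warped = warp_nearest(frame2, flow)
    visible = occlusion_mask == 0
    return np.abs(frame1[visible] - warped[visible]).mean()
```

In a real system the occlusion mask would come from the segmentation subnet and the warping would be differentiable (e.g. bilinear sampling inside a deep-learning framework); this sketch only demonstrates how masking decouples the instrument region from the tissue photometric term.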
