Abstract

As a safe alternative to intra-operative fluoroscopy, ultrasound (US) has been investigated as an imaging modality for various computer assisted orthopedic surgery (CAOS) procedures. However, a low signal-to-noise ratio, imaging artifacts, and bone surfaces appearing several millimeters (mm) thick have hindered the widespread application of US in CAOS. To address these problems, research has focused on the development of accurate, robust, and real-time bone segmentation methods. Most recently, methods based on deep learning have shown very promising results. However, the scarcity of bone US data introduces significant challenges when training deep learning models. In this work, we propose a computational method, based on a novel generative adversarial network (GAN) architecture, to (1) produce synthetic B-mode US images and (2) their corresponding segmented bone surface masks in real time. We show how a duality concept can be implemented for such tasks. Armed with two convolutional blocks, referred to as self-projection and self-attention blocks, our proposed GAN model synthesizes realistic B-mode bone US images and segmented bone masks. Quantitative and qualitative evaluation studies are performed on 1235 scans collected from 27 subjects using two different US machines, comparing our model against state-of-the-art GANs on the task of bone surface segmentation using U-net.
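
The abstract does not specify the internal design of the self-projection and self-attention blocks or the exact generator layout, so the following is only a minimal illustrative sketch, assuming a SAGAN-style spatial self-attention block and a toy generator with two output heads (one for the synthetic B-mode image, one for the bone mask). All class names, layer sizes, and the latent dimension here are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial feature maps (illustrative)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention residual

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)      # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                         # B x C' x HW
        attn = torch.softmax(torch.bmm(q, k), dim=-1)              # B x HW x HW
        v = self.value(x).view(b, -1, h * w)                       # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                                # residual connection


class DualOutputGenerator(nn.Module):
    """Toy generator emitting a B-mode image and a corresponding bone-mask channel."""

    def __init__(self, z_dim=128, base=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 4, 4, 1, 0), nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),
            SelfAttention(base * 2),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.BatchNorm2d(base), nn.ReLU(True),
        )
        self.to_image = nn.Sequential(nn.ConvTranspose2d(base, 1, 4, 2, 1), nn.Tanh())
        self.to_mask = nn.Sequential(nn.ConvTranspose2d(base, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, z):
        h = self.backbone(z.view(z.size(0), -1, 1, 1))
        return self.to_image(h), self.to_mask(h)


if __name__ == "__main__":
    g = DualOutputGenerator()
    img, mask = g(torch.randn(2, 128))
    print(img.shape, mask.shape)  # both are 2 x 1 x 32 x 32 in this toy setup
```

The two heads share a single feature backbone, which is one simple way to keep the synthesized image and its mask spatially aligned; the paper's actual duality mechanism and the self-projection block would replace or extend this sketch.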
