Abstract

Ultrasound imaging is a commonly used technology for visualising patient anatomy in real time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging, with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training for novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, deep learning methods require large amounts of data to provide accurate results. Labelling large ultrasound datasets is challenging because labels are assigned retrospectively to 2D images, without the 3D spatial context available in vivo or the context that would be inferred by visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels. We conclude that the addition of spoken commentaries can increase the performance of ultrasound image classification and eliminate the burden of manually labelling the large EUS datasets needed for deep learning applications.
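This excerpt does not specify the exact architecture, so the following is a minimal PyTorch sketch of the kind of two-branch CNN the abstract describes: one branch encodes the EUS image, the other a spectrogram of the spoken comment, and the concatenated features predict one of the 5 landmark labels. The class name, layer sizes, and input shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoBranchEUSNet(nn.Module):
    """Hypothetical two-branch CNN: an image branch for the EUS frame and a
    voice branch for a spectrogram of the verbal comment, fused to predict
    one of the anatomical-landmark labels."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Image branch: 1-channel ultrasound frame -> 32-dim feature vector
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        # Voice branch: 1-channel spectrogram -> 32-dim feature vector
        self.voice_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        # Fusion head: concatenated branch features -> class logits
        self.classifier = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image: torch.Tensor, voice: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_branch(image), self.voice_branch(voice)], dim=1)
        return self.classifier(fused)

# Example: a batch of 4 EUS frames (256x256) and voice spectrograms (128x128)
model = TwoBranchEUSNet(num_classes=5)
logits = model(torch.randn(4, 1, 256, 256), torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 5])
```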

Highlights

  • Ultrasound (US) imaging is a safe, non-invasive and cost-effective technology for visualising patient anatomy in real time

  • We demonstrate that real-time, noisy intraoperative voice commentaries can provide an easy way to obtain US labels, even when using a small dataset (see the preprocessing sketch after this list)

  • The best performance was obtained with the model trained on both image and voice data, without using pre-trained weights
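As an illustration of how a recorded comment could be turned into an input for the voice branch, the sketch below converts an audio clip into a log-mel spectrogram with torchaudio. The file name and all spectrogram parameters are hypothetical; the paper's actual audio preprocessing is not described in this excerpt.

```python
import torchaudio

# Hypothetical preprocessing: recorded verbal comment -> log-mel spectrogram
# that the voice branch of the CNN can consume. "comment_clip.wav" and the
# parameter values are illustrative, not taken from the paper.
waveform, sample_rate = torchaudio.load("comment_clip.wav")  # (channels, samples)
waveform = waveform.mean(dim=0, keepdim=True)                # mix down to mono

to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

spectrogram = to_db(to_mel(waveform))   # (1, 128, time_frames)
voice_input = spectrogram.unsqueeze(0)  # add a batch dimension: (1, 1, 128, T)
print(voice_input.shape)
```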


Introduction

Ultrasound (US) imaging is a safe, non-invasive and cost-effective technology for visualising patient anatomy in real time. US scanning is highly operator-dependent and images can be difficult to interpret, requiring extensive training with a long learning curve [1]. To address these challenges in US-guided procedures, several simulators have been proposed; yet even after completing the recommended training, a clinician may find it difficult to perform the examination confidently [1]. With the ubiquitous use of US imaging, there is a need to develop tools that can assist clinicians during these procedures.

