Abstract

Accurate quantification of effusion volume is becoming more important as osteoarthritis (OA) is increasingly recognized to have inflammatory components, including joint effusion. Effective clinical management requires accurate quantification of inflammation-related features, particularly synovitis and joint effusion, which represent targets for therapy. Subjective assessment of joint effusion is highly variable, and with the development of deep learning techniques radiological assessments are increasingly being automated. However, most of these techniques use supervised learning and are trained on large annotated datasets. Acquiring "ground truth" labels for large datasets is expensive and time-consuming, and it becomes even more challenging for medical image features such as effusion. Detecting effusions accurately requires a high level of expertise: because pockets of joint fluid can appear in unexpected locations, human experts frequently need to review the entire image stack multiple times to prevent inattentional errors. Methods that can learn from limited labeled data are therefore crucial for medical image analysis.

Our objective was to perform automated measurement of knee effusion volume using minimal labeled data. To this end, we developed an alternative deep learning training strategy based on self-supervised pretraining on unlabeled knee MRI scans.

This study requires two sets of data: 1) a large unlabeled dataset and 2) a small labeled dataset. For the unlabeled dataset, we used four different MRI sequences from the Osteoarthritis Initiative (OAI), from which 64k slices were selected randomly. For the labeled dataset, we used low-resolution sagittal Turbo Spin Echo (TSE) MR sequences from the OAI dataset. Effusion regions were extracted by a trained musculoskeletal radiologist (DC) for 23 participants (a total of 31 scans) using interactive software developed in-house. We used 26 scans (a total of 700 slices) for training and 5 scans for testing.

We used the IMaskRCNN deep learning architecture with a ResNet-101 backbone. Training was performed in two phases: self-supervised pretraining and fine-tuning. Self-supervised pretraining using the fill-blank method was performed on the unlabeled images to learn visual representations of musculoskeletal MRI. After pretraining, we fine-tuned the last layers of the network (with the backbone weights frozen) using the training split of the small labeled TSE dataset. The Dice similarity coefficient and intersection over union (IoU) were used to compare the predicted segmentation with the ground truth.

Qualitative results of the proposed method are presented in the figure; the network provided a good prediction of effusion location and extent. The segmentation model trained with the proposed pretraining method located the effusion in 90% of the slices with IoU = 60%. The Dice score of 0.70 for the detected effusion indicates high agreement between the automatic segmentation and the labels provided by the expert. Self-supervised pretraining on the OAI dataset can be a solution to the scarcity of pixel-wise ground truth (GT) for effusion quantification.

ACKNOWLEDGEMENTS: Alberta Innovates, AHS Chair in Diagnostic Imaging, Medical Imaging Consultants, CIHR. CORRESPONDENCE ADDRESS: banafshe.felfeliyan@ucalgary.ca
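
The abstract describes the pretext task only as the "fill-blank" method. Below is a minimal sketch of what such an inpainting-style pretraining step could look like in PyTorch; the tiny encoder-decoder, block size, and loss restricted to the hidden region are illustrative assumptions, not the study's implementation.

```python
import torch
import torch.nn as nn

def mask_random_block(images, block=32):
    """Hide one random square block per image (the 'fill-blank' pretext task)."""
    corrupted = images.clone()
    mask = torch.zeros_like(images)
    _, _, h, w = images.shape
    for i in range(images.shape[0]):
        top = torch.randint(0, h - block, (1,)).item()
        left = torch.randint(0, w - block, (1,)).item()
        corrupted[i, :, top:top + block, left:left + block] = 0.0
        mask[i, :, top:top + block, left:left + block] = 1.0
    return corrupted, mask

# Illustrative encoder-decoder; in the study the encoder would correspond to the
# ResNet-101 backbone that is later reused for segmentation.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

slices = torch.randn(8, 1, 256, 256)          # stand-in for unlabeled MRI slices
corrupted, mask = mask_random_block(slices)
reconstruction = model(corrupted)
# Reconstruction loss computed only over the hidden region.
loss = ((reconstruction - slices) ** 2 * mask).sum() / mask.sum()
loss.backward()
optimizer.step()
```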
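For the fine-tuning phase, the abstract states that the backbone weights were frozen and only the last layers were updated. The sketch below illustrates that idea using torchvision's stock Mask R-CNN (a ResNet-50 FPN model, not the IMaskRCNN/ResNet-101 architecture used in the study); the checkpoint file name is hypothetical.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Stand-in model: torchvision's Mask R-CNN (ResNet-50 FPN), used here only to
# illustrate backbone freezing; the study used IMaskRCNN with ResNet-101.
model = maskrcnn_resnet50_fpn(num_classes=2)   # classes: background + effusion

# Hypothetical checkpoint produced by the self-supervised pretraining phase.
# model.backbone.load_state_dict(torch.load("ssl_pretrained_backbone.pth"))

# Freeze the backbone so only the detection/segmentation heads are fine-tuned.
for p in model.backbone.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=5e-3, momentum=0.9)
```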
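The reported Dice similarity coefficient and IoU are standard overlap metrics between a predicted binary mask and the expert label. A small NumPy sketch (a hypothetical helper, not code from the paper):

```python
import numpy as np

def dice_and_iou(pred, gt, eps=1e-8):
    """Dice coefficient and IoU between two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    iou = intersection / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

# Example: a 2x2 predicted region overlapping a 3x3 labeled region.
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1
gt = np.zeros((4, 4)); gt[1:4, 1:4] = 1
print(dice_and_iou(pred, gt))   # Dice ~0.62, IoU ~0.44
```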
