Abstract

Background and Objective

Carotid B-mode ultrasound (CBUS) imaging is often used to detect and assess atherosclerotic plaques, which clinicians frequently need to segment in the images for further examination. Multiple studies have proposed deep learning (DL)-based solutions for two-dimensional CBUS plaque segmentation, achieving promising results. However, most of these studies do not report any image standardization and do not represent all plaque types, even though prior studies have highlighted the importance of data standardization in computerized CBUS plaque classification and segmentation. In this study, we propose and separately evaluate three progressive preprocessing schemes, to identify the optimal way to standardize CBUS images for DL-based carotid plaque segmentation, and we assess the effect of each preprocessing scheme on segmentation performance per echodensity-based plaque type (I, II, III, IV, and V).

Methods

We included three CBUS image datasets (276 CBUS images from three medical centres), from which we produced three data folds (balancing, as far as possible, the inclusion of images from all centres in each fold) to train and evaluate the pre-released Channel-wise Feature Pyramid Network for Medicine (CFPNet-M) model in carotid plaque segmentation via 3-fold cross-validation. We used the three data folds in their original version (O) and also generated three preprocessed versions of them: the resolution-normalized (R), the resolution- and intensity-normalized (RN), and the resolution- and intensity-normalized combined with despeckling (RND) version. The samples were cropped to the plaque level, and the intersection over union (IoU) and the Dice Similarity Coefficient (DSC), along with other metrics, were used to measure the model's performance. In each training round, 12% of the images in the two training folds were used for internal validation, with the remaining fold held out for evaluation. Two experienced ultrasonographers manually delineated the plaques in the dataset to provide ground truths, and the plaque types (I to V) were assigned according to the Gray-Weale classification system. We measured the mean±standard deviation of the DSC within and across the three evaluated folds, per preprocessing scheme and per plaque type.

Results

CFPNet-M segmented the plaques in the CBUS images in all data preprocessing versions, yielding progressively improved performance (mean DSC of 81.9±9.1%, 83.6±9.0%, 84.1±8.3%, and 84.4±8.1% for the O, R, RN, and RND 3-fold cross-validation processes, respectively), irrespective of the plaque type. Notably, CFPNet-M performed better for all plaque types (I to V) when trained and tested on the RND data rather than the O version, achieving a DSC of 80.6±11% versus 77.6±17% for type I, 84.3±8% versus 81.2±9% for type II, 84.9±7% versus 82.6±7% for type III, 85.3±8% versus 83.9±7% for type IV, and 84.8±8% versus 81.8±2% for type V. The largest DSC increase from the O to the RND CBUS images was observed for plaque type I (a 3.86% relative increase), followed by types II and V.

Conclusions

In this study, we investigated the impact of CBUS image standardization on DL-based carotid plaque segmentation and showed that normalizing image resolution and intensity, combined with speckle-noise removal, prior to model training and testing enhances the DL model's performance across all plaque types.
Based on these findings, CBUS images should be standardized when destined for DL-based segmentation tasks, and all plaque types should be considered, since in a plethora of existing relevant studies uniformly echolucent plaques and heavily calcified plaques with acoustic shadowing are notably underrepresented.
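
As an illustration of the Methods, the sketch below shows one plausible implementation of the three progressive preprocessing schemes (R, RN, RND). Every specific in it is an assumption, since the abstract does not report implementation details: the target spatial resolution (20 pixels/mm), the linear gray-level mapping of blood to 0 and adventitia to 190 (a convention commonly used in the CBUS literature), and the median filter standing in for whichever despeckling algorithm the study actually used.

```python
import numpy as np
from scipy.ndimage import zoom, median_filter


def normalize_resolution(img: np.ndarray, pixels_per_mm: float,
                         target_ppm: float = 20.0) -> np.ndarray:
    """R step: resample the image to a fixed spatial resolution.

    The 20 px/mm target is an assumed value, not taken from the paper.
    """
    return zoom(img, target_ppm / pixels_per_mm, order=1)  # bilinear


def normalize_intensity(img: np.ndarray, blood_mean: float,
                        adventitia_mean: float) -> np.ndarray:
    """N step: linear gray-level normalization.

    Maps the mean blood intensity to 0 and the mean adventitia intensity
    to 190, a convention common in CBUS studies; the abstract does not
    state which normalization the authors actually used.
    """
    scale = 190.0 / (adventitia_mean - blood_mean)
    shifted = (img.astype(np.float64) - blood_mean) * scale
    return np.clip(shifted, 0.0, 255.0).astype(np.uint8)


def despeckle(img: np.ndarray, size: int = 3) -> np.ndarray:
    """D step: speckle suppression; a median filter is a simple
    stand-in for whichever despeckling filter the study applied."""
    return median_filter(img, size=size)


def preprocess_rnd(img: np.ndarray, pixels_per_mm: float,
                   blood_mean: float, adventitia_mean: float) -> np.ndarray:
    """RND = R, then N, then D, applied in sequence."""
    img = normalize_resolution(img, pixels_per_mm)
    img = normalize_intensity(img, blood_mean, adventitia_mean)
    return despeckle(img)
```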
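
Similarly, the two headline overlap metrics reported above reduce to simple set operations on binary masks: DSC = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B|, where A and B denote the predicted and ground-truth plaque regions. A minimal implementation:

```python
import numpy as np


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0


def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0
```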
