Abstract

The goal of this study was to evaluate the performance of a multi-annotator segmentation framework for hippocampal head and body segmentation. The multi-annotator framework employs several distinct deep learning-based models, with diverse architectures, loss functions, and training algorithms, each performing organ segmentation independently on the input MR image. Each deep learning model generates a probability map; these maps are then fused by decision fusion algorithms or a deep learning-based combiner to produce a single binary map of the target organ/structure. The purpose of the multi-annotator method is to exploit the complementary information provided by the several independent annotators (independent deep learning models). This approach benefits from the synergistic effect of integrating the different annotators' decisions, achieving better performance than any annotator working alone. The performance of the proposed multi-annotator approach was examined for delineation of the hippocampal head and body from MR images. Other segmentation approaches were used for comparison, including a multi-view deep learning-based technique, atlas-based methods, shape-based averaging (SBA), STAPLE, and majority voting. A variety of deep learning methods (covering several architectures and loss functions) were implemented, with a residual architecture with dilated convolutional kernels and a cross-entropy loss function (ResNet-CE) demonstrating the highest performance as a standalone model. The ResNet-CE model attained Dice indices of 88.6 ± 1.7 for the hippocampal body and 88.7 ± 1.5 for the head. Overall, the proposed multi-annotator segmentation strategy, comprising six separate deep learning models including the ResNet model, outperformed the other segmentation approaches with Dice indices of 91.0 ± 1.3 for the body and 91.1 ± 1.3 for the head. The best atlas-based approach yielded Dice indices of 88.4 ± 1.5 (body) and 88.5 ± 1.5 (head), and the multi-view segmentation method yielded 88.9 ± 1.5 (body) and 89.0 ± 1.4 (head). This research showed that the proposed multi-annotator technique for semantic segmentation outperforms each of the annotators individually (the independent deep learning models included in the multi-annotator approach). The proposed method could be used to improve the overall accuracy and performance of machine learning- and/or deep learning-based approaches in semantic segmentation tasks.
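To make the fusion step described above concrete, the sketch below shows a minimal, hypothetical decision-fusion routine: each independent model ("annotator") contributes a probability map, and the maps are combined either by averaging the probabilities or by pixel-wise majority voting before thresholding into a single binary mask. This is only an illustration under simple assumptions; the paper's actual framework also considers STAPLE, shape-based averaging, and a learned deep combiner, none of which are implemented here, and the function and variable names are invented for the example.

```python
import numpy as np

def fuse_probability_maps(prob_maps, method="average", threshold=0.5):
    """Fuse per-model probability maps into one binary segmentation mask.

    prob_maps: list of arrays of identical shape, values in [0, 1],
               one map per independent model ("annotator").
    method:    "average"  -> mean probability across models, then threshold
               "majority" -> threshold each map, then pixel-wise majority vote
    """
    stacked = np.stack(prob_maps, axis=0)  # shape: (n_models, *image_shape)
    if method == "average":
        fused = stacked.mean(axis=0)       # soft fusion of probabilities
        return (fused >= threshold).astype(np.uint8)
    if method == "majority":
        votes = (stacked >= threshold).astype(np.uint8)
        return (votes.sum(axis=0) > len(prob_maps) / 2).astype(np.uint8)
    raise ValueError(f"unknown fusion method: {method}")

# Toy usage: three hypothetical annotator outputs for a 2x2 image region
maps = [np.array([[0.9, 0.2], [0.6, 0.4]]),
        np.array([[0.8, 0.3], [0.7, 0.1]]),
        np.array([[0.4, 0.6], [0.9, 0.2]])]
print(fuse_probability_maps(maps, method="majority"))
```

In this toy example a pixel is labeled foreground only when at least two of the three annotators agree, which is the intuition behind the majority-voting baseline the study compares against.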
