Abstract

Segmentation and the subsequent quantitative assessment of the target object in computed tomography (CT) images provide valuable information for the analysis of intracerebral hemorrhage (ICH) pathology. However, most existing methods lack an effective strategy for exploring the discriminative semantics of multi-scale ICH regions, making it difficult to handle the complex morphology found in clinical data. In this paper, we propose a novel multi-scale object equalization learning network (MOEL-Net) for accurate ICH region segmentation. Specifically, we first introduce a shallow feature extraction module (SFEM) that obtains shallow semantic representations to preserve sufficient and effective detailed location information. Then, a deep feature extraction module (DFEM) is leveraged to extract the deep semantic information of the ICH region from the combination of SFEM and original image features. To further achieve equalized learning across ICH regions of different scales, we introduce a multi-level semantic feature equalization fusion module (MSFEFM), which explores the equalized fusion features of the described objects with the assistance of the shallow and deep semantic information provided by SFEM and DFEM. Driven by these three designs, MOEL-Net shows a solid capacity to capture discriminative features when segmenting ICH regions of various scales. To promote research on automatic clinical ICH region segmentation, we collect two datasets, VMICH and FRICH (divided into Test A and Test B), for evaluation. Experimental results show that the proposed model achieves Dice scores of 88.28%, 90.92%, and 90.95% on VMICH, FRICH Test A, and FRICH Test B, respectively, outperforming fourteen competing methods.
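For intuition, below is a minimal PyTorch sketch of the three-module pipeline the abstract describes (SFEM → DFEM → MSFEFM → segmentation head). The abstract gives no implementation details, so every layer choice, channel width, and the concatenation-based fusion rule here are assumptions for illustration only, not the authors' architecture.

```python
# Illustrative sketch of the MOEL-Net pipeline named in the abstract.
# All internals (conv blocks, channel widths, fusion by concatenation)
# are assumptions; only the module names and data flow come from the text.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 conv -> BatchNorm -> ReLU; a common building block (assumed)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SFEM(nn.Module):
    """Shallow feature extraction: preserves fine detail and location cues."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(in_ch, ch), conv_block(ch, ch))

    def forward(self, x):
        return self.body(x)


class DFEM(nn.Module):
    """Deep feature extraction from the combination of SFEM output and the
    original image, per the abstract; the downsample/upsample is assumed."""
    def __init__(self, in_ch=33, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(in_ch, ch),
            nn.MaxPool2d(2),
            conv_block(ch, ch),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        return self.body(x)


class MSFEFM(nn.Module):
    """Equalized fusion of shallow and deep semantics (fusion rule assumed
    to be channel concatenation followed by convolution)."""
    def __init__(self, shallow_ch=32, deep_ch=64, ch=64):
        super().__init__()
        self.fuse = conv_block(shallow_ch + deep_ch, ch)

    def forward(self, shallow, deep):
        return self.fuse(torch.cat([shallow, deep], dim=1))


class MOELNet(nn.Module):
    """End-to-end composition: SFEM -> DFEM -> MSFEFM -> 1-channel mask."""
    def __init__(self):
        super().__init__()
        self.sfem = SFEM()
        self.dfem = DFEM()
        self.msfefm = MSFEFM()
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        shallow = self.sfem(x)
        deep = self.dfem(torch.cat([x, shallow], dim=1))
        fused = self.msfefm(shallow, deep)
        return torch.sigmoid(self.head(fused))


if __name__ == "__main__":
    net = MOELNet()
    ct_slice = torch.randn(1, 1, 256, 256)  # dummy single-channel CT slice
    print(net(ct_slice).shape)  # torch.Size([1, 1, 256, 256])
```

The sketch keeps the shallow branch at full resolution while the deep branch trades resolution for semantic depth, which is one plausible reading of why the abstract fuses both streams to balance small and large ICH regions.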
