Abstract

In medical image segmentation, accuracy is generally high for objects with clear boundary features, as in the segmentation of X-ray images. For objects with less distinct boundaries, however, such as skin regions with similar color and texture or CT images of adjacent organs with overlapping Hounsfield-unit ranges, accuracy drops significantly. Inspired by the human visual system, we propose a multi-scale detail-enhanced network. First, we design a detail-enhanced module that heightens the contrast between central and peripheral receptive-field information by superimposing two asymmetric convolutions in different directions with a standard convolution. We then extend this module across scales into a multi-scale detail-enhanced module: the difference between central and peripheral information at multiple scales makes the network more sensitive to fine detail, yielding more accurate segmentation. To reduce the impact of redundant information on the segmentation result and to enlarge the effective receptive field, we propose a channel multi-scale module adapted from Res2Net. It creates independent parallel multi-scale branches within a single residual structure, improving the utilization of redundant information and enlarging the effective receptive field at the channel level. Experiments on four datasets show that our method outperforms commonly used medical image segmentation algorithms, and detailed ablation studies confirm the effectiveness of each module.
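The abstract does not give the module's exact formulation, so the following is only a minimal numpy sketch of the center-surround idea it describes: two stacked asymmetric (1×k and k×1) convolutions estimate the peripheral response, and a contrast against the central response highlights detail. The function names, the averaging kernels, and the subtraction form are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 2-D 'same' cross-correlation with zero padding (illustrative only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def detail_enhanced(x, k=3):
    """Hypothetical center-surround contrast map.

    Two asymmetric averaging convolutions (1 x k, then k x 1) stand in for
    the peripheral receptive field; the pixel itself is the central response.
    """
    row = np.ones((1, k)) / k          # 1 x k asymmetric convolution
    col = np.ones((k, 1)) / k          # k x 1 asymmetric convolution
    surround = conv2d_same(conv2d_same(x, row), col)
    center = x
    # Contrast between central and peripheral information: near zero in
    # flat regions, large near boundaries, so fine detail stands out.
    return center - surround
```

On a flat region the map is (near) zero, while across a boundary between two similar regions it is non-zero, which is the sensitivity to detail the abstract attributes to the module.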

