Abstract

Automatic liver and tumor segmentation is an essential step in hepatic disease detection, therapeutic planning, and post-treatment assessment. Computed tomography (CT) has become the modality of choice for medical experts diagnosing hepatic anomalies. However, with advancements in CT acquisition protocols, CT data volumes are growing, and manual delineation of the liver and tumor from CT volumes has become cumbersome and tedious for medical experts; the outcome is therefore highly reliant on the operator's proficiency. Further, automatic liver and tumor segmentation from CT images is challenging due to the complicated parenchyma, highly variable shapes, low voxel-intensity variation among the liver, tumors, and neighbouring organs, and discontinuities in liver boundaries. Recently, deep learning (DL) has exhibited extraordinary potential in medical image interpretation, and owing to its effectiveness in advancing performance, DL-based convolutional neural networks (CNNs) have gained significant interest in the medical realm. The proposed HFRU-Net is derived from the UNet architecture by modifying the skip pathways with local feature reconstruction and a feature-fusion mechanism that represents detailed contextual information in the high-level features. Further, the fused features are adaptively recalibrated by learning channel-wise interdependencies, so that the prominent details of the modified high-level features are acquired, using the squeeze-and-excitation network (SENet). In the bottleneck layer, we employ an atrous spatial pyramid pooling (ASPP) module to represent multiscale features with dissimilar receptive fields, capturing the rich spatial information in the low-level features. These modifications improve segmentation performance and reduce the computational complexity of the model compared with competing methods.
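The channel-wise recalibration described above can be illustrated with a minimal, dependency-free sketch of a squeeze-and-excitation step (global average pooling, a small excitation MLP, then channel-wise gating). The function name and the toy weight matrices below are hypothetical, for illustration only; they are not the paper's actual implementation or learned parameters.

```python
import math

def se_recalibrate(channels, w1, w2):
    """Sketch of SE-style channel recalibration.

    channels: list of C feature maps (each a list of rows of floats)
    w1: hidden x C weights of the first (squeeze) FC layer
    w2: C x hidden weights of the second (excitation) FC layer
    """
    # Squeeze: global average pool each channel to one scalar descriptor.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in channels]
    # Excitation: FC -> ReLU -> FC -> sigmoid yields one gate per channel.
    hidden = [max(0.0, sum(w * zi for w, zi in zip(row, z))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Scale: reweight every channel by its learned gate.
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(channels, gates)]

# Toy example: two 2x2 channels, hidden size 1, all-ones weights.
chans = [[[1.0, 1.0], [1.0, 1.0]], [[0.0, 0.0], [0.0, 0.0]]]
out = se_recalibrate(chans, [[1.0, 1.0]], [[1.0], [1.0]])
```

In a real network the gates are produced per input from learned weights, so informative channels are amplified and less useful ones suppressed before the fused features flow onward.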
The efficacy of the proposed model is demonstrated by extensive experimentation on two publicly available datasets (LiTS and 3DIRCADb). The experimental results show that the proposed model attained dice similarity coefficients of 0.966 and 0.972 for liver segmentation and 0.771 and 0.776 for liver tumor segmentation on the LiTS and 3DIRCADb datasets, respectively. Further, the robustness of HFRU-Net is confirmed on the independent LiTS challenge test dataset, where the proposed model attained a global dice of 95.0% for liver segmentation and 61.4% for tumor segmentation, which is comparable with state-of-the-art methods.
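For reference, the dice similarity coefficient reported above measures the overlap between a predicted mask and the ground truth: twice the intersection divided by the total foreground size. A minimal sketch on flat binary masks (the function name is ours, not from the paper):

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient for flat binary masks (lists of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

# Example: one of two predicted foreground voxels overlaps the single
# true foreground voxel -> dice = 2*1 / (2+1) = 0.667
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])
```

A dice of 1.0 means perfect overlap; the liver scores near 0.97 above indicate almost complete agreement with the reference masks, while tumor scores are lower because lesions are small and irregular.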
