Abstract

Liver cancer is one of the leading causes of cancer death. Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer in adults and the most common cause of death among people with chronic liver disease. In clinical practice, accurate and automated liver and tumor segmentation is highly desired to help oncologists with early diagnosis and treatment of HCC. Manual segmentation of tumors and lesions from abdominal computed tomography (CT) scans is time-consuming, requires domain expertise, and is prone to inaccuracies. Automated segmentation, on the other hand, is a challenging task due to the nature of medical images: heterogeneous modalities and formats, minimal labeled training data, and high class imbalance in the labeled data. To address these problems, we propose a 2-dimensional network consisting of two CNN models, called CU-Net. To perform double segmentation, the first model segments the liver from the CT scan of the abdominal cavity, and the second detects tumors within the liver. Our model goes one step further and provides useful information for interpretation purposes, reporting the number of tumors, the size of each tumor, the size of the liver, and the percentage of the liver that is tumorous, which is not provided by any other liver-lesion segmentation model. We used the Liver Tumor Segmentation (LiTS) challenge data for training and validation, achieving a Dice score of 0.894 for liver segmentation and 0.595 for tumor detection.
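The Dice scores reported above measure overlap between a predicted segmentation mask and the ground-truth mask. As a minimal sketch (not the authors' code; the toy masks below are illustrative assumptions), the Dice coefficient for binary masks can be computed as:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks standing in for a predicted liver region vs. ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_score(pred, gt), 3))  # → 0.909
```

A score of 1.0 means perfect overlap and 0.0 means none; in practice the metric is computed per scan and averaged over the validation set.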
