The accuracy of automatic tumor segmentation in PET/computed tomography (PET/CT) images is crucial for the effective treatment and monitoring of Hodgkin lymphoma. This study aims to address the difficulty some segmentation algorithms have in differentiating lymphoma from normal organ uptake, a challenge arising from limited PET image resolution and tumor heterogeneity. Encoder-decoder architectures and their variants are state of the art for image segmentation; among them, U-Net is the most widely used for medical image segmentation. In this study, we propose a fully automatic approach for Hodgkin lymphoma segmentation that combines U-Net and DenseNet architectures to reduce network loss on very small lesions, trained with the Tversky loss function. Our hypothesis is that fusing these two deep learning models improves the accuracy and robustness of Hodgkin lymphoma segmentation. A dataset of 141 samples was used to train the proposed network, and two separate datasets of 20 samples were allocated for testing and evaluation. The network achieved a mean Dice similarity coefficient of 0.759, with a median of 0.767 and an interquartile range of 0.647-0.837. Good agreement was observed between the ground truth of the test images and the predicted volumes, with precision and recall of 0.798 and 0.763, respectively. This study demonstrates that integrating U-Net and DenseNet architectures with the Tversky loss function can significantly enhance the accuracy of Hodgkin lymphoma segmentation in PET/CT images relative to similar studies.
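The Tversky loss mentioned above generalizes the Dice loss by weighting false negatives and false positives asymmetrically, which helps keep very small lesions from being ignored during training. Below is a minimal NumPy sketch of the standard formulation; the alpha/beta values shown are illustrative defaults, not the values used in this study.

```python
import numpy as np

def tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss for binary segmentation masks.

    alpha weights false negatives, beta weights false positives.
    Setting alpha > beta penalizes missed lesion voxels more
    heavily, which favors recall on very small lesions.
    (alpha=0.7, beta=0.3 are illustrative, not from the study.)
    """
    y_true = np.asarray(y_true, dtype=np.float64).ravel()
    y_pred = np.asarray(y_pred, dtype=np.float64).ravel()
    tp = np.sum(y_true * y_pred)          # true positives
    fn = np.sum(y_true * (1.0 - y_pred))  # false negatives
    fp = np.sum((1.0 - y_true) * y_pred)  # false positives
    tversky_index = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return 1.0 - tversky_index
```

With alpha = beta = 0.5 the Tversky index reduces to the Dice similarity coefficient, so this loss is a strict generalization of the Dice loss commonly used in segmentation.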