Abstract

The advancement of deep learning for medical imaging necessitates tools that can accurately identify body regions from whole-body scans as an essential pre-processing step for downstream tasks. Typically, these deep learning models rely on labeled data and supervised learning, which is labor-intensive. However, the emergence of self-supervised learning is revolutionizing the field by eliminating the need for labels. The purpose of this study was to compare neural network architectures for self-supervised models that produce a body part regression (BPR) slice score to aid in the development of anatomically localized segmentation models. VGG, ResNet, DenseNet, ConvNeXt, and EfficientNet BPR models were implemented in the MONAI/PyTorch framework. Landmark organs were correlated with slice scores, and the mean absolute error (MAE) between the predicted and actual slices of various organ landmarks was calculated. Four localized DynUNet segmentation models (thorax, upper abdomen, lower abdomen, and pelvis) were developed using the BPR slice scores. The Dice similarity coefficient (DSC) was compared between the localized and baseline segmentation models. The best-performing BPR model was the EfficientNet architecture, with an overall MAE of 3.18, compared to an MAE of 6.29 for the VGG baseline model. The localized segmentation models significantly outperformed the baseline in 16 of 20 organs, with a DSC of 0.88. Enhanced neural network architectures such as EfficientNet provide a large performance increase over the VGG baseline in localizing anatomical structures in CT via the BPR task. Utilizing the BPR slice score is shown to be effective for anatomically localized segmentation tasks, with improved performance.
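As a rough illustration of the setup described above, the sketch below instantiates a 2-D EfficientNet from MONAI as a slice-score regressor and trains it with a self-supervised ordering loss over axial CT slices. The backbone variant, loss formulation, and slice sampling are assumptions for illustration only, not the authors' exact implementation.

```python
# Minimal sketch (assumptions noted): a 2-D EfficientNet mapping each axial CT
# slice to a scalar body part regression (BPR) slice score, trained without
# labels so that scores increase monotonically along the cranio-caudal axis.
import torch
import torch.nn.functional as F
from monai.networks.nets import EfficientNetBN

# Single-output regression head; "efficientnet-b0" is an assumed variant.
model = EfficientNetBN(
    "efficientnet-b0",
    pretrained=False,
    spatial_dims=2,   # applied slice-wise to 2-D axial slices
    in_channels=1,    # single-channel CT
    num_classes=1,    # scalar slice score
)

def bpr_loss(scores: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Self-supervised loss on slices sampled from one CT volume.

    scores: (k,) predicted slice scores, ordered by slice index.
    z:      (k,) physical slice positions along the z-axis (e.g. in mm).
    The order term pushes scores to increase with z; the distance term makes
    score gaps roughly proportional to physical gaps (one common choice, an
    assumption rather than the paper's stated loss).
    """
    ds = scores[1:] - scores[:-1]          # score differences between neighbours
    dz = z[1:] - z[:-1]                    # physical spacing between neighbours
    order = -F.logsigmoid(ds).mean()       # penalise non-increasing scores
    dist = F.smooth_l1_loss(ds / dz, torch.ones_like(ds))
    return order + dist
```

In such a pipeline, the predicted slice scores at landmark anatomy can then be used to crop a whole-body CT to one of the four body regions before it is passed to the corresponding localized DynUNet segmentation model.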