Automatic mandible segmentation of CT images is an essential step toward accurate preoperative prediction of the intended target in three-dimensional (3D) virtual surgical planning. Segmenting the mandible is challenging due to the complexity of its structure, imaging artifacts, and metal implants or dental filling materials. In recent years, convolutional neural networks (CNNs) have brought significant improvements in mandible segmentation. However, the loss of spatial information caused by aggregation at pooling layers, together with the cost of collecting and labeling large volumes of training data, remains a significant obstacle in medical practice. We have optimized the data-efficient 3D-UCaps architecture, which combines the advantages of capsule networks and CNNs, for accurate mandible segmentation on volumetric CT images. A novel hybrid loss function, a weighted combination of the focal and margin loss functions, is also proposed to handle voxel class imbalance. To evaluate the performance of the proposed method, a comparable experiment was conducted with the 3D-UNet. All experiments are performed on the Public Domain Database for Computational Anatomy (PDDCA). The proposed method and the 3D-UNet achieved average Dice coefficients of 90% and 88% on the PDDCA, respectively. The results indicate that the proposed method yields accurate mandible segmentation and outperforms the popular 3D-UNet model while requiring more than 50% fewer parameters.
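The hybrid loss described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the standard focal loss and the capsule-network margin loss (with its usual constants m+ = 0.9, m− = 0.1, λ = 0.5) are combined with a hypothetical balancing weight `w`, whose value the abstract does not specify.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss for binary voxel labels.
    p: predicted foreground probability per voxel; y: binary ground truth."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    a = np.where(y == 1, alpha, 1.0 - alpha)   # class-balancing factor
    return float(np.mean(-a * (1.0 - pt) ** gamma * np.log(pt)))

def margin_loss(lengths, y, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Capsule margin loss; lengths are capsule output vector norms,
    interpreted as class-presence probabilities per voxel."""
    pos = y * np.maximum(0.0, m_pos - lengths) ** 2
    neg = lam * (1.0 - y) * np.maximum(0.0, lengths - m_neg) ** 2
    return float(np.mean(pos + neg))

def hybrid_loss(p, y, w=0.5):
    """Weighted combination of focal and margin loss.
    `w` is an assumed hyperparameter, not taken from the paper."""
    return w * focal_loss(p, y) + (1.0 - w) * margin_loss(p, y)
```

Because the focal term down-weights easy, well-classified voxels, the combination keeps the gradient focused on the rare mandible voxels, which is the stated motivation for addressing class imbalance.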