Auto-segmentation is a critical and foundational step in medical image analysis. The quality of auto-segmentation techniques influences the efficiency of precision radiology and radiation oncology, since high-quality auto-segmentations require only limited manual correction. Segmentation metrics are necessary to evaluate auto-segmentation results and to guide the development of auto-segmentation techniques. The metrics in wide use today compare the auto-segmentation with the ground truth in terms of overlapping area (e.g., the Dice Coefficient (DC)) or the distance between boundaries (e.g., the Hausdorff Distance (HD)). However, these metrics may not reflect the manual mending effort required when auto-segmentation results are reviewed in clinical practice. In this article, we study different segmentation metrics to identify an appropriate way of evaluating auto-segmentations in line with clinical demands. The time experts spend correcting auto-segmentations is recorded as a measure of the required mending effort. Five well-defined metrics are examined in a correlation analysis experiment and a regression experiment: the overlapping area-based DC, the boundary distance-based HD, the boundary length-based surface DC (surDC) and added path length (APL), and a newly proposed hybrid metric, the Mendability Index (MI). In addition to these explicitly defined metrics, we also preliminarily explore the feasibility of predicting the mending effort with deep learning models that take the segmentation masks and the original images as input. Experiments are conducted on datasets of 7 objects from three institutions, containing the original computed tomography (CT) images, ground truth segmentations, auto-segmentations, corrected segmentations, and the recorded mending time.
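For readers unfamiliar with the two baseline metrics, the overlap-based DC and the boundary distance-based HD can be sketched for binary masks as follows. This is a minimal illustrative implementation, not the evaluation code used in the study; the brute-force boundary comparison assumes small masks and unit voxel spacing.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def dice_coefficient(gt, seg):
    # DC = 2|A ∩ B| / (|A| + |B|), an overlapping area-based metric
    inter = np.logical_and(gt, seg).sum()
    return 2.0 * inter / (gt.sum() + seg.sum())

def hausdorff_distance(gt, seg):
    # Symmetric HD: the largest distance from a boundary point of one
    # mask to the nearest boundary point of the other.
    def boundary(mask):
        # Boundary voxels = mask minus its erosion
        return np.argwhere(mask & ~binary_erosion(mask))
    a, b = boundary(gt), boundary(seg)
    # Pairwise Euclidean distances between the two boundary point sets
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For example, shifting a 4×4 square mask down by one pixel leaves a large overlap (DC = 0.75) and a boundary displacement of a single pixel (HD = 1.0); the point of the article is that such numbers need not track how long an expert spends mending the contour.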
According to the correlation analysis and regression experiments on the five well-defined metrics, a variant of MI best indicates the mending effort for sparse objects, while a variant of HD works best for non-sparse objects. Moreover, the deep learning models predict the effort required to mend auto-segmentations well, even without the need for ground truth segmentations, demonstrating the potential of a novel and simple way to evaluate and improve auto-segmentation techniques.