Abstract

Loss functions, which govern a deep learning-based optimization process, have been widely investigated to handle the class-imbalanced data issue in crack segmentation. However, their performance varies across models and datasets, making it challenging to choose the most appropriate one. To address this issue, this paper conducts a large-scale performance comparison of twelve commonly used loss functions on four benchmark datasets. A statistical test-based ranking scheme, which integrates accuracy, sensitivity to threshold changes, and varying degrees of imbalance severity, enables a comprehensive comparison. The results show that most loss functions achieve relatively similar accuracy on mildly imbalanced datasets, whereas weighted binary cross-entropy loss, Focal loss, Dice-based losses, and compound loss functions significantly outperform the others as imbalance severity increases. Overall, the Focal Tversky loss function exhibits excellent performance in handling the imbalanced data issue.
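
For context, the sketch below shows a minimal PyTorch implementation of the Focal Tversky loss in its widely used binary-segmentation formulation. The function name, default weights (alpha = 0.7, beta = 0.3, gamma = 0.75), and tensor shapes are illustrative assumptions and may differ from the exact implementation evaluated in the paper.

```python
import torch


def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation (illustrative sketch).

    pred:   predicted probabilities in [0, 1], shape (N, H, W)
    target: binary ground-truth masks, shape (N, H, W)
    alpha:  weight on false negatives (alpha > beta emphasizes recall,
            which suits thin, sparse crack pixels)
    beta:   weight on false positives
    gamma:  focal exponent; gamma = 1 recovers the plain Tversky loss
    """
    # Flatten each sample so the index is computed per image.
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1).float()

    # Soft confusion-matrix terms.
    tp = (pred * target).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)

    # Tversky index, then the focal reshaping averaged over the batch.
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return torch.pow(1.0 - tversky, gamma).mean()
```

The asymmetric alpha/beta weighting is what makes the loss attractive for severely imbalanced crack masks: penalizing false negatives more heavily counteracts the tendency of pixel-wise losses to favor the dominant background class.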
