Abstract

Most segmentation losses are arguably variants of the Cross-Entropy (CE) or Dice losses. On the surface, these two categories of losses (i.e., distribution-based vs. geometry-based) seem unrelated, and there is no clear consensus as to which category is the better choice, with each performing differently across benchmarks and applications. Furthermore, it is widely argued within the medical-imaging community that Dice and CE are complementary, which has motivated the use of compound CE-Dice losses. In this work, we provide a theoretical analysis showing that CE and Dice share a much deeper connection than previously thought. First, we show that, from a constrained-optimization perspective, both decompose into two components: a similar ground-truth matching term, which pushes the predicted foreground regions towards the ground truth, and a region-size penalty term, which imposes different biases on the size (or proportion) of the predicted regions. Then, we provide bound relationships and an information-theoretic analysis that uncover hidden region-size biases: Dice has an intrinsic bias towards specific, extremely imbalanced solutions, whereas CE implicitly encourages the ground-truth region proportions. Our theoretical results explain the wide experimental evidence in the medical-imaging literature that Dice losses bring improvements for imbalanced segmentation. They also explain why CE dominates natural-image problems with diverse class proportions, where Dice may have difficulty adapting to different region-size distributions. Based on our theoretical analysis, we propose a principled and simple solution that enables explicit control of the region-size bias. The proposed method integrates CE with explicit penalty terms, based on the L1 norm or the KL divergence, which encourage the predicted region proportions to match the target class proportions, thereby mitigating class imbalance without losing generality. Comprehensive experiments and ablation studies over different losses and applications validate our theoretical analysis, as well as the effectiveness of these explicit and simple region-size terms. The code is available at https://github.com/by-liu/SegLossBias.
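To make the proposed region-size terms concrete, the following PyTorch sketch illustrates the idea described above: standard CE combined with an L1 or KL penalty between the mean predicted class proportions and the ground-truth proportions. This is our own minimal illustration of the idea, not the authors' released implementation; the function name, `weight` hyperparameter, and exact penalty form are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def ce_with_region_size_penalty(logits, target, weight=0.1, mode="kl", eps=1e-8):
    """Cross-entropy plus an explicit region-size (class-proportion) penalty.

    logits: (B, C, H, W) raw network outputs
    target: (B, H, W) integer ground-truth labels
    mode:   "kl" or "l1" divergence between predicted and ground-truth
            class proportions (a sketch of the abstract's idea, not the
            paper's exact formulation).
    """
    ce = F.cross_entropy(logits, target)

    probs = logits.softmax(dim=1)            # (B, C, H, W) soft predictions
    pred_prop = probs.mean(dim=(2, 3))       # predicted class proportions, (B, C)

    num_classes = logits.shape[1]
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    gt_prop = one_hot.mean(dim=(2, 3))       # ground-truth class proportions, (B, C)

    if mode == "kl":
        # KL(gt || pred): pushes predicted proportions towards the target ones
        penalty = (gt_prop * ((gt_prop + eps) / (pred_prop + eps)).log()).sum(dim=1).mean()
    else:  # "l1"
        penalty = (pred_prop - gt_prop).abs().sum(dim=1).mean()

    return ce + weight * penalty
```

Here `weight` trades off the pixel-wise matching term against the region-size term; in practice it would be tuned per application, and the ground-truth proportions could be replaced by any target prior one wishes to impose.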
