Abstract

In the context of automated analysis of eye fundus images, there is a common fallacy that prior works achieve very high scores in the segmentation of lesions. That fallacy is fueled by some reviews reporting very high scores and, perhaps, by some confusion of terms. A simple analysis of the details of the few prior works that really do segmentation reveals sensitivities between 7% and 70% at 1 false positive per image (FPI). That is clearly below the performance of medical doctors trained to detect signs of Diabetic Retinopathy, since they can distinguish well the contours of lesions in Eye Fundus Images (EFI). Still, a full segmentation of lesions could be an important step both for visualization and for further automated analysis using rigorous quantification of areas and numbers of lesions to better diagnose. I discuss what prior work really does, using evidence-based analysis, and confront it with segmentation networks, comparing on the terms used by prior work to show that the best-performing segmentation network outperforms those prior works. I also compare architectures to understand how the network architecture influences the results. I conclude that, with the correct architecture and tuning, the semantic segmentation network improves up to 20 percentage points over prior work in the real task of segmentation of lesions. I also conclude that the network architecture and optimizations are important factors and that there are still important limitations in current work.
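
As a concrete illustration of the sensitivity-at-1-FPI figure cited above, the following is a minimal sketch (my own simplification, not the paper's or the surveyed works' evaluation code) of how lesion-level sensitivity at a target false-positives-per-image rate can be computed from scored candidate detections; the data and the assumption that each true positive matches a distinct lesion are purely illustrative.

```python
def sensitivity_at_fpi(detections, lesions_per_image, target_fpi=1.0):
    """Lesion-level sensitivity at a target false-positives-per-image (FPI) rate.

    detections: one list per image of (score, is_true_positive) candidate detections.
    lesions_per_image: number of ground-truth lesions in each image.
    Simplification: assumes each true-positive candidate matches a distinct lesion.
    """
    n_images = len(detections)
    total_lesions = sum(lesions_per_image)
    # Sweep the score threshold from the most to the least confident detection.
    flat = sorted((d for img in detections for d in img), key=lambda d: -d[0])
    best_sens, tp, fp = 0.0, 0, 0
    for _score, is_tp in flat:
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / n_images <= target_fpi:      # still within the allowed FP budget
            best_sens = tp / total_lesions
    return best_sens

# Tiny illustrative run: 2 images, 3 ground-truth lesions, 4 scored candidates.
dets = [[(0.9, True), (0.4, False)], [(0.8, True), (0.3, False)]]
print(sensitivity_at_fpi(dets, lesions_per_image=[2, 1]))  # -> 0.666... at <= 1 FPI
```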

Highlights

  • The segmentation network was better in terms of image-level detection of lesions for referral (Table 6), where DeepLabV3 ranks first in HA and SE and ranks well in HE and MA when compared with the alternatives (Zhou et al. [22], Liu et al. [23], Haloi et al. [15], Mane et al. [24], Gondal et al. [9])

  • A significant difference between IoU and sensitivity signals a large number of false positives (see the sketch after this list). These results show that segmentation networks could still benefit from further research to better deal with false positives (FP) and false negatives (FN)

  • The visualizations shown in Figure 7 for IDRID and in Figure 8 for DIARETDB1 help illustrate the capacity of the segmentation network: the lesions are reasonably well recognized in those figures, but there are still many false positives
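
To make the relation between IoU and sensitivity in the second highlight concrete, here is a minimal sketch (an illustration, not the paper's evaluation code) of both pixel-level metrics computed from binary lesion masks; the toy masks are invented for the example.

```python
import numpy as np

def sensitivity_and_iou(pred, gt):
    """Pixel-level sensitivity (recall) and IoU for binary lesion masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # lesion pixels correctly segmented
    fp = np.logical_and(pred, ~gt).sum()   # background pixels marked as lesion
    fn = np.logical_and(~pred, gt).sum()   # lesion pixels that were missed
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return sensitivity, iou

# Toy masks: the prediction covers every lesion pixel (sensitivity = 1.0) but also
# many background pixels, so IoU drops far below sensitivity -- the gap is the FP.
gt = np.zeros((64, 64), dtype=bool);   gt[10:20, 10:20] = True    # 100 lesion pixels
pred = np.zeros((64, 64), dtype=bool); pred[5:30, 5:30] = True    # 625 predicted pixels
print(sensitivity_and_iou(pred, gt))   # -> (1.0, 0.16)
```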


Summary

Introduction

Most “lesion detection” approaches in related work do not even find the locations of lesions in the image. Instead, they receive as input small squares and classify those squares as a certain lesion or as background. In other words, they do the easy work and leave out the difficult work. Worse still, surveys such as [1] [2] [3] can report scores of 90% to 100% in tasks that are presented as segmentations of lesions but are not in reality, and I look at the details of the prior work to show that. An overlap of 20% or even 50% between regions, no matter the shape or size, is a very bad tracing of contours that is accounted for as 100% correct in those works, as the sketch below illustrates.
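
The sketch below (my own illustration, not code from the cited works) contrasts the overlap-based detection criterion with pixel-level IoU: a predicted region covering only 20% of a ground-truth lesion is counted as a fully correct detection under such a criterion, even though its tracing of the contours, measured by IoU, is very poor. The masks and the 20% threshold are illustrative.

```python
import numpy as np

def overlap_fraction(pred, gt):
    """Fraction of the ground-truth lesion covered by the prediction."""
    return np.logical_and(pred, gt).sum() / gt.sum()

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()

# Ground-truth lesion: a 10x10 block. The prediction clips only a thin strip of it
# and spills onto the background next to the lesion.
gt = np.zeros((64, 64), dtype=bool);   gt[10:20, 10:20] = True
pred = np.zeros((64, 64), dtype=bool); pred[10:20, 18:40] = True

print(overlap_fraction(pred, gt))  # 0.2   -> a "correct detection" at a 20% threshold
print(iou(pred, gt))               # ~0.07 -> the contours are in fact traced very poorly
```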

Methods
Discussion
Conclusion
