Abstract

We analyzed fundus images to identify whether convolutional neural networks (CNNs) can discriminate between right and left fundus images. We gathered 98,038 fundus photographs from the Gyeongsang National University Changwon Hospital, South Korea, and augmented these with the Ocular Disease Intelligent Recognition dataset. We created eight combinations of image sets to train CNNs, and used class activation mapping to identify the discriminative image regions relied on by the CNNs. The CNNs identified right and left fundus images with high accuracy (more than 99.3% on the Gyeongsang National University Changwon Hospital dataset and 91.1% on the Ocular Disease Intelligent Recognition dataset) regardless of whether the images were flipped horizontally. The depth and complexity of the CNN affected accuracy (DenseNet121: 99.91%, ResNet50: 99.86%, VGG19: 99.37%). DenseNet121 could not discriminate within an image set composed only of left eyes (55.1%, p = 0.548). Class activation mapping identified the macula as the discriminative region used by the CNNs. Several previous studies have used flipping to augment fundus photograph data; however, flipped photographs remain distinguishable from non-flipped images. This asymmetry could introduce undesired bias into machine learning. Therefore, when developing a CNN with fundus photographs, care should be taken when applying data augmentation with flipping.

Highlights

  • We analyzed fundus images to identify whether convolutional neural networks (CNNs) can discriminate between right and left fundus images

  • We investigated whether CNNs can distinguish left and right fundus photographs even when one image is horizontally flipped

  • We used class activation mapping (CAM)[9] to determine which part of the fundus image was important for discriminating the photographs (Fig. 1)


Introduction

Convolutional neural networks (CNNs) consist of layered kernels that extract and learn features from an image as it passes through the network. This multi-layered connection in CNNs is similar to the animal visual cortex, so animal neuron activity can be predicted using CNNs[4]. The accuracy of CNNs in diagnosing ophthalmic diseases using fundus images has been evaluated in several studies[6], and some such studies used image flipping to augment data[7,8]. In this study, we investigated whether CNNs can discriminate between right and left fundus images, even when one image is horizontally flipped, and used class activation mapping (CAM)[9] to determine which part of the fundus image was important for discriminating the photographs (Fig. 1).
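The CAM technique mentioned above can be sketched as follows. This is a minimal illustration of the general method (a weighted sum of the final convolutional feature maps using the classifier weights for the target class), with assumed toy shapes, not the paper's exact networks or implementation:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM from final-layer activations.

    feature_maps:  (C, H, W) activations of the last convolutional layer
    class_weights: (C,) fully connected weights for the predicted class
    """
    # Weighted sum over channels -> (H, W) spatial importance map
    cam = np.tensordot(class_weights, feature_maps, axes=1)
    cam = np.maximum(cam, 0)        # keep only positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()            # normalise to [0, 1] for heatmap overlay
    return cam

# Toy example: 4 feature maps of size 8x8 (shapes are illustrative)
rng = np.random.default_rng(0)
features = rng.random((4, 8, 8))
weights = rng.random(4)
cam = class_activation_map(features, weights)
print(cam.shape)  # (8, 8)
```

In practice the (H, W) map is upsampled to the input resolution and overlaid on the fundus photograph; regions with values near 1 (here, the macula) are those the network weighted most heavily for its decision.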


