Abstract

The rapid development of artificial neural network techniques, especially convolutional neural networks, has encouraged researchers to adopt them in the medical domain, specifically to build assistive tools that help professionals diagnose patients. The main problem researchers face in the medical domain is the lack of annotated datasets large enough to train and evaluate complex deep neural networks. In this paper, to assist researchers interested in applying deep learning to help ophthalmologists diagnose eye-related diseases, we provide an optical coherence tomography (OCT) dataset prepared in collaboration with ophthalmologists from the King Abdullah University Hospital, Irbid, Jordan. The dataset consists of 21,991 OCT images distributed over seven eye diseases in addition to normal images (no disease), namely Choroidal Neovascularisation, Full Macular Hole (Full Thickness), Partial Macular Hole, Central Serous Retinopathy, Geographic Atrophy, Macular Retinal Oedema, and Vitreomacular Traction. To the best of our knowledge, this dataset is the largest of its kind in which the images belong to actual patients from Jordan and the annotation was carried out by ophthalmologists. Two classification tasks were applied to this dataset: a binary classification that distinguishes images of healthy eyes (normal) from images of diseased eyes (abnormal), and a multi-class classification in which the deep neural network is trained to distinguish between the seven diseases listed above in addition to the normal case. In both tasks, a modified U-Net was used: because the original U-Net is designed for image segmentation, an additional block of layers was added to make the network capable of classification.
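The abstract does not detail the added block. A common way to turn a segmentation backbone such as U-Net into a classifier is to pool its feature maps globally and pass the result through a fully connected layer followed by a softmax; the sketch below illustrates that generic pattern in NumPy. All names and shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def classification_head(feature_map, weights, bias):
    """Map a (C, H, W) feature map from a segmentation backbone to
    class probabilities via global average pooling + a dense layer.

    feature_map : (C, H, W) array of backbone activations
    weights     : (num_classes, C) dense-layer weight matrix
    bias        : (num_classes,) dense-layer bias
    """
    pooled = feature_map.mean(axis=(1, 2))      # global average pooling -> (C,)
    logits = weights @ pooled + bias            # dense layer -> (num_classes,)
    exp = np.exp(logits - logits.max())         # numerically stable softmax
    return exp / exp.sum()                      # class probabilities, sum to 1
```

For the binary task the head would output 2 classes; for the multi-class task, 8 (seven diseases plus normal).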
The binary classification achieved an accuracy of 84.90% and a quadratic weighted kappa of 69.50%, while the multi-class classification achieved an accuracy of 63.68% and a quadratic weighted kappa of 66.06%.
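Quadratic weighted kappa, the second metric reported, penalises disagreements by the squared distance between the predicted and true class indices, so confusing nearby classes costs less than confusing distant ones. A self-contained sketch of the standard formula (illustrative; the authors' evaluation code is not given in the abstract, and scikit-learn's `cohen_kappa_score(..., weights='quadratic')` computes the same quantity):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted Cohen's kappa for integer class labels 0..n_classes-1."""
    # Observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights, normalised to [0, 1]
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    w /= (n_classes - 1) ** 2
    # Expected confusion matrix under chance agreement
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
```

Perfect agreement yields 1.0, chance-level agreement 0.0, and systematic disagreement a negative value.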
