Abstract

Retinal layer thickness measurement provides important information for the reliable diagnosis of retinal diseases and for evaluating disease progression and responses to medical treatment. This task depends critically on accurate detection of the retinal layer edges in OCT images. Here, we sought to identify the most suitable edge detectors for the retinal OCT image segmentation task. Three of the most promising edge detection algorithms were identified from the related literature: the Canny edge detector, the two-pass method, and the EdgeFlow technique. Quantitative evaluation shows that the two-pass method consistently outperforms the Canny detector and the EdgeFlow technique in delineating retinal layer boundaries in OCT images. In addition, the mean localization deviation metric shows that the two-pass method introduced the least edge shifting. These findings suggest that the two-pass method is the best of the three algorithms for detecting retinal layer boundaries. The overall better performance of the Canny and two-pass methods relative to the EdgeFlow technique implies that OCT images carry more intensity-gradient information than texture change along the retinal layer boundaries. These results will guide our future work on the quantitative analysis of retinal OCT images and support the effective use of OCT technology in ophthalmology.

Highlights

  • Optical coherence tomography (OCT) is the optical equivalent of ultrasonography, with the capability of capturing depth-resolved cross-sectional images of biological tissues in vivo at near-histologic resolution [1]

  • In terms of performance evaluation metrics, good parameter settings yield high values of the figure of merit (FOM), true positive rate (TPR), and accuracy (ACC), together with their adjusted forms, and low values of the false positive rate (FPR) and mean localization deviation (MLD)

  • Using the performance evaluation metrics (FOM, TPR, FPR, and ACC) and their adjusted versions (FOMADJ, TPRADJ, FPRADJ, and ACCADJ), we examined the three methods applied to realistic retinal OCT images; a sketch of how these metrics can be computed follows this list

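The paper does not give implementation details for these metrics, so the sketch below is only one plausible way to compute them, assuming a binary detected edge map and a binary ground-truth boundary map. The function name edge_metrics is introduced here for illustration, FOM is taken as Pratt's figure of merit, and MLD is taken as the mean distance from detected edge pixels to the nearest true edge pixel; the adjusted variants (FOMADJ and so on) are paper-specific and omitted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_metrics(detected, truth, alpha=1.0 / 9.0):
    """Compare a binary detected edge map with a binary ground-truth map.

    Returns Pratt's figure of merit (FOM), true/false positive rates
    (TPR, FPR), pixel accuracy (ACC), and a mean localization deviation
    (MLD) taken here as the mean distance from each detected edge pixel
    to its nearest ground-truth edge pixel (an assumed definition).
    """
    detected = detected.astype(bool)
    truth = truth.astype(bool)

    # Distance from every pixel to the nearest ground-truth edge pixel.
    dist_to_truth = distance_transform_edt(~truth)
    d = dist_to_truth[detected]

    # Pratt's FOM: detected pixels are discounted by their squared distance
    # to the ideal edge; alpha = 1/9 is the conventional scaling constant.
    n_det, n_true = int(detected.sum()), int(truth.sum())
    fom = np.sum(1.0 / (1.0 + alpha * d ** 2)) / max(n_det, n_true)

    # Pixel-wise confusion counts, treating each pixel as edge / non-edge.
    tp = np.sum(detected & truth)
    fp = np.sum(detected & ~truth)
    fn = np.sum(~detected & truth)
    tn = np.sum(~detected & ~truth)

    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    mld = d.mean() if n_det else np.inf  # undefined when nothing is detected

    return {"FOM": fom, "TPR": tpr, "FPR": fpr, "ACC": acc, "MLD": mld}
```

In the sense used above, a good detector pushes FOM, TPR, and ACC toward 1 while keeping FPR and MLD low.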

Introduction

Optical coherence tomography (OCT) is the optical equivalent of ultrasonography, with the capability of capturing depth-resolved cross-sectional images of biological tissues in vivo at near-histologic resolution [1]. Owing to its noninvasiveness and high resolution, in combination with the characteristics of the eye and retinal anatomy, OCT has seen rapid development of clinical applications in ophthalmology in recent years. Retinal layer thickness measurement relies on accurate OCT image segmentation. The literature shows that diverse types of edge detection algorithms can be employed as a key step in image segmentation. Based on the nature of the information used in their algorithms, these methods can be classified into intensity-gradient-based approaches, such as the Canny detector and the two-pass method, and texture-based approaches, such as the EdgeFlow technique.

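As a concrete illustration of one of the three compared detectors, the snippet below applies the Canny edge detector to a single OCT B-scan. The paper does not specify its implementation, so the use of OpenCV, the placeholder file names, the smoothing kernel, and the hysteresis thresholds are assumptions made for this sketch rather than the authors' settings.

```python
import cv2

# "oct_bscan.png" is a placeholder file name, not a file from the study.
bscan = cv2.imread("oct_bscan.png", cv2.IMREAD_GRAYSCALE)

# OCT B-scans are dominated by speckle noise, so smooth before edge detection.
smoothed = cv2.GaussianBlur(bscan, (5, 5), 1.5)

# Canny detection with illustrative hysteresis thresholds; in practice these
# would be tuned per dataset, as the paper's parameter evaluation suggests.
edges = cv2.Canny(smoothed, 30, 90)

cv2.imwrite("oct_canny_edges.png", edges)
```

Gradient-based detectors such as Canny and the two-pass method respond to intensity changes across layer boundaries, whereas EdgeFlow-style detectors rely on texture cues, which is the distinction the abstract draws on.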