Abstract

We have developed a new method to segment and analyze retinal layers in optical coherence tomography (OCT) images, with the aim of monitoring disease-related changes in layer thickness. OCT is an imaging modality that produces cross-sectional images of the retina, making it possible to measure the thickness of individual layers. In this paper we present a method that identifies six key layers in OCT images. OCT images pose challenges for conventional edge-detection algorithms, chiefly because speckle noise significantly degrades the sharpness of inter-layer boundaries. We use a directional filter bank, whose wedge-shaped passbands reduce noise while preserving edge sharpness; previous methods rely on Gaussian or median filter variants, which blur edges and therefore yield poor edge-detection performance. The filter bank is applied in a spatially variant manner that exploits additional information from intersecting scans. Extracted edge cues are validated according to the magnitude of the gray-level transition across the edge, along with their strength, continuity, relative location, and polarity. These cues are then processed according to a retinal model we have developed, yielding the edge contours.
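
To make the directional-filtering idea concrete, the following Python sketch implements a single wedge-shaped frequency-domain passband, i.e., one channel of a directional filter bank. It is a minimal illustration under assumed parameters (the orientation, wedge half-width, and synthetic test image are hypothetical choices), not the implementation described in the paper, and a full filter bank would combine several such wedges at different orientations.

# Illustrative sketch: one wedge-shaped channel of a directional
# filter bank, applied in the 2-D DFT domain. All parameters below
# are hypothetical, not the paper's actual settings.
import numpy as np

def wedge_filter(image, center_angle, half_width):
    """Keep frequency components whose orientation lies within
    +/- half_width (radians) of center_angle; zero the rest."""
    rows, cols = image.shape
    # Frequency-plane coordinates, shifted so DC sits at the center.
    fy = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    theta = np.arctan2(fy, fx)  # orientation of each frequency sample
    # Angular distance to the wedge axis, folded modulo pi so the
    # passband is symmetric about the origin (wedge + mirror wedge).
    diff = np.abs((theta - center_angle + np.pi / 2) % np.pi - np.pi / 2)
    mask = (diff <= half_width).astype(float)
    mask[rows // 2, cols // 2] = 1.0  # always keep the DC term
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Usage on a synthetic B-scan-like image: a horizontal boundary
# corrupted by additive noise standing in for speckle.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[64:, :] = 1.0                     # horizontal "layer boundary"
img += 0.3 * rng.standard_normal(img.shape)
smoothed = wedge_filter(img, center_angle=np.pi / 2, half_width=np.pi / 8)

Because retinal layer boundaries in a B-scan are roughly horizontal, their spectral energy concentrates along the vertical frequency axis; a wedge centered there passes the edge energy while rejecting most of the isotropically distributed noise, which is why this style of filtering can denoise without blurring the boundaries.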
