Abstract

Retinal optical coherence tomography (OCT) images are widely used in the diagnosis of ocular conditions. However, random shifts and orientation changes of the retinal layers in OCT B-scans lead to appearance variations across the scans. These variations reduce the accuracy of algorithms applied in the analysis of OCT images. In this study, we propose a preprocessing step to compensate for these variations and align B-scans. First, by incorporating a total variation (TV) loss into the well-known Unet model, we propose a TV-Unet model to accurately detect the retinal pigment epithelium (RPE) layer in each B-scan. We then use the detected RPE layer in the alignment method to form a curvature curve and a reference line. A novel window transferring-based alignment approach is applied to force the curve points onto a straight line, while preserving the shape and size of pathological lesions. Since detection of the RPE layer is a crucial step in the proposed alignment method, we utilize various datasets to train and test the TV-Unet, providing a multimodal, device-independent OCT image alignment method. The TV-Unet localizes the RPE layer in OCT images with low boundary error (maximum of 1.94 pixels) and high Dice coefficient (minimum of 0.98). Quantitative and qualitative results indicate that the proposed method efficiently detects the RPE layer and aligns OCT images while preserving the structure and size of the retinal lesions (biomarkers) in the OCT scans.
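The two ingredients described above can be sketched in code. The first sketch adds a total variation term to a standard segmentation loss; the exact formulation, base loss (binary cross-entropy here), and weight `tv_weight` are not given in the abstract and are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def tv_loss(pred):
    # Total variation of a predicted probability map of shape (B, 1, H, W):
    # mean absolute difference between vertically and horizontally adjacent pixels.
    dh = torch.abs(pred[:, :, 1:, :] - pred[:, :, :-1, :]).mean()
    dw = torch.abs(pred[:, :, :, 1:] - pred[:, :, :, :-1]).mean()
    return dh + dw

def segmentation_loss(logits, target, tv_weight=0.1):
    # Base segmentation loss plus a TV term that discourages ragged,
    # discontinuous RPE predictions. tv_weight is an assumed hyperparameter.
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce + tv_weight * tv_loss(torch.sigmoid(logits))
```

The second sketch flattens a B-scan once the RPE boundary is known, using a plain per-column (per-A-scan) vertical shift toward a reference row. It illustrates the goal of the alignment step only; it is not the paper's window transferring-based approach, which is specifically designed to preserve lesion shape and size. `rpe_rows` is an assumed per-column array of detected RPE row indices.

```python
import numpy as np

def flatten_bscan(bscan, rpe_rows, reference_row=None):
    # Shift each A-scan (column) so the detected RPE row lands on a common
    # reference line. np.roll wraps values around the image edge; a real
    # implementation would pad instead of wrapping.
    h, w = bscan.shape
    if reference_row is None:
        reference_row = int(np.median(rpe_rows))
    flat = np.zeros_like(bscan)
    for col in range(w):
        shift = reference_row - int(rpe_rows[col])
        flat[:, col] = np.roll(bscan[:, col], shift)
    return flat
```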
