Abstract

Retinal optical coherence tomography (OCT) images are widely used in the diagnosis of ocular conditions. However, random shifts and orientation changes of the retinal layers in OCT B-scans lead to appearance variations across scans. These variations reduce the accuracy of algorithms applied in the analysis of OCT images. In this study, we propose a preprocessing step to compensate for these variations and align B-scans. First, by incorporating a total variation (TV) loss into the well-known Unet model, we propose a TV-Unet model to accurately detect the retinal pigment epithelium (RPE) layer in each B-scan. We then use the detected RPE layer in the alignment method to form a curvature curve and a reference line. A novel window-transferring-based alignment approach forces the curve points to form a straight line while preserving the shape and size of pathological lesions. Since detection of the RPE layer is a crucial step in the proposed alignment method, we trained and tested the TV-Unet on various datasets, yielding a multimodal, device-independent OCT image alignment method. The TV-Unet localizes the RPE layer in OCT images with low boundary error (maximum of 1.94 pixels) and high Dice coefficient (minimum of 0.98). Quantitative and qualitative results indicate that the proposed method efficiently detects the RPE layer and aligns OCT images while preserving the structure and size of retinal lesions (biomarkers) in the OCT scans.
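As a rough illustration of how a TV term can be added to a standard segmentation objective, the minimal PyTorch sketch below combines binary cross-entropy with an anisotropic total-variation penalty on the predicted RPE probability map. The abstract does not specify the exact TV-Unet loss formulation, so the loss mix and the `tv_weight` hyperparameter are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F


def total_variation_loss(prob_map: torch.Tensor) -> torch.Tensor:
    """Anisotropic TV penalty on a predicted probability map of shape (N, 1, H, W).

    Sums absolute differences between vertically and horizontally neighbouring
    pixels, encouraging smooth, thin layer predictions.
    """
    dh = torch.abs(prob_map[:, :, 1:, :] - prob_map[:, :, :-1, :]).mean()
    dw = torch.abs(prob_map[:, :, :, 1:] - prob_map[:, :, :, :-1]).mean()
    return dh + dw


def tv_segmentation_loss(logits: torch.Tensor, target: torch.Tensor,
                         tv_weight: float = 0.1) -> torch.Tensor:
    """BCE segmentation loss plus a weighted TV term (tv_weight is hypothetical)."""
    prob = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce + tv_weight * total_variation_loss(prob)
```

In practice, the TV term penalizes jagged, fragmented layer masks, which is why a smoothness prior of this kind is a natural fit for a thin, continuous structure such as the RPE layer.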
