Abstract

The shape and structure of retinal layers are fundamental characteristics for diagnosing many ophthalmological diseases. Most retinal layer segmentation methods based on optical coherence tomography B-scans consist of two steps, pixel classification followed by retinal layer extraction, and optimizing these two steps independently reduces accuracy. Deep-learning-based methods are highly accurate but require a large amount of labeled data. This paper proposes a single-step, transformer-based method for retinal layer segmentation, trained on axial data (A-scans), that directly obtains the boundary of each layer. The proposed method was evaluated on two public data sets: the first contains eight retinal layer boundaries for diabetic macular edema, and the second contains nine retinal layer boundaries for healthy controls and subjects with multiple sclerosis. Its mean absolute distance errors are 0.99 pixels and 3.67 pixels on the two sets, respectively, and its root mean square error on the latter set is 1.29 pixels. Moreover, its accuracy remains acceptable even when the training data is reduced to 30% of the original. The proposed method achieves state-of-the-art performance while maintaining the correct layer topology and requiring less labeled data.
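The reported error metrics compare predicted and ground-truth boundary positions per A-scan column. A minimal sketch of how such metrics are typically computed is shown below; the function names and toy boundary arrays are illustrative assumptions, not part of the paper's implementation.

```python
import numpy as np

def mean_absolute_distance(pred, gt):
    # Mean absolute distance (in pixels) between predicted and
    # ground-truth boundary positions, one position per A-scan column.
    return np.mean(np.abs(pred - gt))

def root_mean_square_error(pred, gt):
    # Root mean square error (in pixels) over the same positions.
    return np.sqrt(np.mean((pred - gt) ** 2))

# Toy boundary for a B-scan with 5 columns (pixel row index per column);
# values are hypothetical, for illustration only.
gt = np.array([10.0, 11.0, 12.0, 12.0, 13.0])
pred = np.array([11.0, 11.0, 13.0, 12.0, 12.0])

print(mean_absolute_distance(pred, gt))   # 0.6
print(root_mean_square_error(pred, gt))   # ~0.7746
```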
