Abstract

Background and Objective: Optical coherence tomography (OCT) is a useful technique for monitoring the state of the retinal layers in both humans and animal models. Automated OCT analysis in rats is highly relevant for studying the possible toxic effects of drugs and other treatments before human trials. In this paper, two different approaches to detect the most significant retinal layers in a rat OCT image are presented. Methods: One approach combines local horizontal intensity profiles with a newly proposed variant of the watershed transformation; the other is built upon an encoder-decoder convolutional network architecture. Results: After extensive validation, average absolute distance errors of 3.77 ± 2.59 µm and 1.90 ± 0.91 µm are achieved by the two approaches, respectively, on a batch of the rat OCT database. In a second test of the deep-learning-based method on an unseen batch of the database, an average absolute distance error of 2.67 ± 1.25 µm is obtained. The rat OCT database used in this paper is made publicly available to facilitate further comparisons. Conclusions: The results demonstrate the competitiveness of the first approach, which outperforms the commercial Insight image segmentation software (Phoenix Research Labs), as well as its utility for generating labelled images for validation purposes, significantly speeding up ground-truth generation. The deep-learning-based method improves on the results of the more conventional approach and of other state-of-the-art techniques. In addition, it was verified that the proposed network generalizes to new rat OCT images.
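
The abstract identifies the deep-learning approach only as an encoder-decoder convolutional network. As an illustration, the sketch below shows a minimal, generic encoder-decoder for per-pixel retinal-layer segmentation in PyTorch; it is not the authors' actual network, and the layer widths, depth, and number of output classes are illustrative assumptions.

```python
# Minimal encoder-decoder segmentation sketch (generic illustration; NOT the paper's network).
# Assumptions: single-channel OCT B-scans and a small number of retinal-layer classes.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 4):
        super().__init__()
        # Encoder: two convolutional blocks, each halving the spatial resolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Decoder: transposed convolutions restore the input resolution
        # and produce per-pixel class logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Usage on a dummy 256x256 B-scan:
model = TinyEncoderDecoder()
logits = model(torch.randn(1, 1, 256, 256))  # -> (1, num_classes, 256, 256)
labels = logits.argmax(dim=1)                # per-pixel layer prediction
```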
