Abstract

Purpose: Adaptive optics scanning laser ophthalmoscope (AOSLO) capillary perfusion images can show large variations in contrast, intensity, and background signal, limiting the use of global or adaptive thresholding techniques for automatic segmentation. We sought to develop an automated approach to segment perfused capillaries in AOSLO images.

Methods: A total of 12,979 image patches were extracted from manually segmented AOSLO montages of 14 eyes and used to train a convolutional neural network (CNN) that classified pixels as capillaries, large vessels, background, or image canvas. An additional 1764 patches were extracted from AOSLO montages of four separate subjects and segmented manually by two raters (ground truth) as well as automatically by the CNN, an Otsu thresholding approach, and a Frangi vesselness approach. A modified Dice coefficient was created to account for slight spatial differences between the same manually and CNN-segmented capillaries.

Results: CNN capillary segmentation achieved an accuracy (0.94), a Dice coefficient (0.67), and a modified Dice coefficient (0.90) that were significantly higher than those of the other automated approaches (P < 0.05). There were no significant differences in capillary density or mean segment length between the manual ground-truth and CNN segmentations (P > 0.05).

Conclusions: Close agreement between the CNN and manual segmentations enables robust and objective quantification of perfused capillary metrics. The developed CNN is time and computationally efficient, and it distinguishes capillaries from areas containing diffuse background signal and larger underlying vessels.

Translational Relevance: This automatic segmentation algorithm greatly increases the efficiency of quantifying AOSLO capillary perfusion images.
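The abstract does not give the exact formulation of the modified Dice coefficient, but a common way to tolerate small spatial offsets between two segmentations of the same capillary is to score overlap against a slightly dilated copy of each mask. The sketch below illustrates that idea; the function names, the `tol_px` tolerance parameter, and the choice of a square dilation structuring element are assumptions for illustration, not the paper's actual definition.

```python
import numpy as np
from scipy.ndimage import binary_dilation


def dice(a, b):
    """Standard Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def tolerance_dice(pred, truth, tol_px=2):
    """Spatial-tolerance Dice (illustrative, not the paper's exact metric).

    Each mask's pixels are counted as matched if they fall within `tol_px`
    pixels of the other mask, so segmentations of the same capillary that
    are offset by a few pixels still score as agreeing.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    struct = np.ones((2 * tol_px + 1, 2 * tol_px + 1), dtype=bool)
    pred_dil = binary_dilation(pred, structure=struct)
    truth_dil = binary_dilation(truth, structure=struct)
    tp_pred = np.logical_and(pred, truth_dil).sum()   # predicted pixels near truth
    tp_truth = np.logical_and(truth, pred_dil).sum()  # truth pixels near prediction
    denom = pred.sum() + truth.sum()
    return (tp_pred + tp_truth) / denom if denom else 1.0
```

With `tol_px=0` this reduces to the standard Dice coefficient, which is consistent with the reported gap between the ordinary (0.67) and modified (0.90) values: thin, one- to two-pixel-wide capillary traces are heavily penalized by exact pixel overlap but not by a small spatial tolerance.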
