Abstract

Automatically classifying retinal blood vessels appearing in fundus camera images into arterioles and venules is difficult because of variation between people as well as in image quality, contrast and brightness. Using the most dominant features for the retinal vessel types in each image, rather than predefining the set of characteristic features prior to classification, may achieve better performance. In this paper, we present a novel approach to classifying retinal vessels extracted from fundus camera images which combines Orthogonal Locality Preserving Projections (OLPP) for feature extraction with a Gaussian Mixture Model with Expectation-Maximization (GMM-EM) unsupervised classifier. The classification rate with 47 features (the largest dimension tested) using OLPP on our own ORCADES dataset and the publicly available DRIVE dataset was $90.56\%$ and $86.7\%$ respectively.
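The sketch below illustrates the general shape of such a pipeline, assuming a precomputed feature matrix X with one row per vessel segment (the 47-feature case mentioned above). OLPP is not available in scikit-learn, so the projection here is a simplified locality-preserving step (kNN graph Laplacian plus a generalized eigenproblem) rather than the paper's exact orthogonal variant; the two-cluster GMM-EM step uses sklearn.mixture.GaussianMixture. All variable names and parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph
from sklearn.mixture import GaussianMixture

def lpp_projection(X, n_components=10, n_neighbors=5):
    """Simplified locality-preserving projection (not the full OLPP)."""
    # Binary kNN adjacency, symmetrised so the neighbourhood graph is undirected.
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W                                      # graph Laplacian
    A = X.T @ L @ X                                # numerator of the LPP objective
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])    # regularised denominator
    # The smallest generalized eigenvectors give locality-preserving directions.
    _, vecs = eigh(A, B)
    return vecs[:, :n_components]

# Placeholder data: 200 vessel segments with 47 features each (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 47))

P = lpp_projection(X, n_components=10)
Z = X @ P                                          # project into the low-dimensional space
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(Z)
# 'labels' assigns each segment to one of two clusters, interpreted in the
# paper's setting as arteriole vs venule.
```

Because the GMM-EM step is unsupervised, the resulting cluster labels carry no inherent arteriole/venule identity; any evaluation against ground truth has to map clusters to classes first.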
