Abstract

In recent decades, automatically segmenting exudates from color fundus images through medical image analysis has proven to be a difficult task. This paper compares the performance of several image segmentation techniques implemented on a Raspberry Pi. By applying the various techniques to standard publicly available datasets, the optimal segmentation method is selected, with performance measured using metrics such as similarity measures, execution time, sensitivity, and specificity. The source color retinal images are first obtained from publicly available resources. These images may contain Gaussian noise, impulse (salt-and-pepper) noise, and speckle noise. A pre-processing stage is therefore applied to the source images to reduce noise and improve brightness. Several segmentation methods, including thresholding, the mean-shift algorithm, the watershed algorithm, the distance transform, K-means clustering, Fuzzy C-Means clustering, and the Active Contour Model, are then used to segment the normal and abnormal regions in the color fundus images. According to the findings, Fuzzy C-Means clustering yields the highest segmentation accuracy but requires the longest execution time.
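As a rough illustration of the pipeline the abstract describes, the sketch below assumes Python with OpenCV and NumPy (neither is confirmed by the paper). It denoises a fundus image, then applies Otsu thresholding and K-means as two of the compared baselines, and includes a minimal Fuzzy C-Means routine on the green-channel intensities. All function names, parameters, the input file name, and the two-cluster setting are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: pre-processing followed by three of the listed
    # segmentation methods (thresholding, K-means, Fuzzy C-Means).
    # Assumes Python with OpenCV and NumPy; all parameters are illustrative.
    import cv2
    import numpy as np

    def preprocess(bgr):
        """Reduce impulse and Gaussian-like noise, then return the green channel
        (often used for exudate work because of its contrast; an assumption here)."""
        denoised = cv2.medianBlur(bgr, 3)                 # suppress salt-and-pepper noise
        denoised = cv2.GaussianBlur(denoised, (5, 5), 0)  # smooth Gaussian noise
        return denoised[:, :, 1]                          # green channel (uint8)

    def otsu_threshold(gray):
        """Global thresholding baseline; Otsu picks the threshold automatically."""
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask

    def kmeans_segment(gray, k=2):
        """K-means clustering on pixel intensities; returns a label image."""
        data = gray.reshape(-1, 1).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, _ = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
        return labels.reshape(gray.shape)

    def fuzzy_c_means(gray, c=2, m=2.0, max_iter=50, tol=1e-4):
        """Minimal Fuzzy C-Means on intensities; returns hard labels from memberships.
        Vectorized over all pixels, so large images may need downsampling first."""
        x = gray.reshape(-1).astype(np.float64)
        rng = np.random.default_rng(0)
        centers = rng.choice(x, size=c, replace=False)
        for _ in range(max_iter):
            d = np.abs(x[None, :] - centers[:, None]) + 1e-9              # (c, N) distances
            ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
            u = 1.0 / ratio.sum(axis=1)                                   # membership update
            um = u ** m
            new_centers = (um * x[None, :]).sum(axis=1) / um.sum(axis=1)  # center update
            if np.max(np.abs(new_centers - centers)) < tol:
                centers = new_centers
                break
            centers = new_centers
        return u.argmax(axis=0).reshape(gray.shape)

    if __name__ == "__main__":
        image = cv2.imread("fundus.png")  # placeholder file name
        green = preprocess(image)
        masks = {
            "otsu": otsu_threshold(green),
            "kmeans": kmeans_segment(green),
            "fcm": fuzzy_c_means(green),
        }

The similarity measures, sensitivity, and specificity mentioned above would then be computed by comparing each resulting mask against ground-truth annotations, which is not shown in this sketch.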
