Abstract
In recent decades, automatically segmenting exudates from color fundus images through medical image analysis has proven to be a challenging task. This paper compares the performance of several image segmentation techniques implemented on a Raspberry Pi processor. The techniques are applied to standard, publicly available datasets, and the best segmentation method is selected by evaluating metrics such as similarity coefficients, execution time, sensitivity, and specificity. The input color retinal images are first obtained from publicly available databases. These images may contain Gaussian noise, impulse noise, and speckle noise, so a pre-processing stage is applied to reduce the noise and enhance brightness. Several segmentation methods, namely thresholding, the mean-shift algorithm, the watershed algorithm, the distance transform, K-means clustering, Fuzzy C-Means clustering, and the Active Contour Model, are then used to segment the normal and abnormal regions in the color fundus images. The results show that Fuzzy C-Means clustering yields higher segmentation accuracy but requires a longer execution time.
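As a rough illustration of the pipeline the abstract describes, the sketch below combines a pre-processing stage (denoising plus contrast enhancement) with one of the compared methods, K-means clustering, using OpenCV. The choice of filters, kernel sizes, the use of the green channel, the CLAHE parameters, and the file names are assumptions made for illustration only; the paper does not specify its implementation details, and the other methods (thresholding, mean-shift, watershed, distance transform, Fuzzy C-Means, Active Contour) would replace the clustering step.

```python
# Minimal sketch of pre-processing + K-means segmentation of a fundus image.
# Filter choices, parameters, and file names are illustrative assumptions.
import cv2
import numpy as np

def preprocess(fundus_bgr):
    """Denoise a color fundus image and enhance its local contrast."""
    green = fundus_bgr[:, :, 1]                       # green channel (assumed) shows exudates most clearly
    denoised = cv2.medianBlur(green, 5)               # suppress impulse (salt-and-pepper) noise
    denoised = cv2.GaussianBlur(denoised, (5, 5), 0)  # smooth Gaussian / speckle noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)                      # local contrast / brightness enhancement

def kmeans_segment(gray, k=3):
    """Cluster pixel intensities with K-means; return the label image and cluster centers."""
    samples = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.reshape(gray.shape), centers

if __name__ == "__main__":
    img = cv2.imread("fundus.png")                    # hypothetical input image
    enhanced = preprocess(img)
    labels, centers = kmeans_segment(enhanced, k=3)
    # Take the brightest cluster as the candidate (abnormal) exudate region.
    exudate_mask = (labels == int(np.argmax(centers))).astype(np.uint8) * 255
    cv2.imwrite("exudate_mask.png", exudate_mask)
```

Evaluation of such a mask against ground-truth annotations would then use the metrics listed above (similarity coefficients, sensitivity, specificity) together with the measured execution time on the Raspberry Pi.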