Small and large farm animals are commonly identified by techniques such as microchipping or ear tagging. However, these techniques can harm animal welfare and are vulnerable to theft and fraud. Modern biometric identification methods, including retina, face, and iris recognition, are available and have a smaller detrimental effect on animal welfare. Because biometric markers are immutable and tamper-resistant, they make it feasible to prevent theft and fraud. This study identifies animals from retina images, exploiting each animal's distinct retinal pattern. To the best of our knowledge, no publicly accessible dataset of animal retinal images exists. The three main goals of this study are to: 1) produce a novel dataset of cattle retinal images and make it publicly available; 2) develop an effective image processing-based system for retinal identification and recognition; and 3) build a graphical user interface (GUI) for querying the database to determine whether a specific cattle retina is present. To accomplish these goals, a dataset comprising 2430 images of the left and right eyes of 300 cattle was collected. The retinal images were then segmented using image processing methods such as scaling, color transformations, image sharpening, contrast enhancement, noise filtering, and histogram equalization. Distinctive retinal features were extracted from the segmented images with the SURF, FAST, BRISK, and HARRIS techniques. During the identification phase, the animal whose retinal features most closely matched the query retina was recognized. Within the developed CattNIS system, the SURF approach achieved the highest accuracy, 92.25 percent. These results show that identifying cattle by retinal vascular patterns is highly effective.
The cattle retinal image dataset collected in this study is expected to contribute significantly, in both quantity and quality, to future research in this field.