Abstract

Anemia, often caused by internal parasites such as Haemonchus contortus, presents significant health and productivity challenges for small ruminants. The primary goal of this study was to accurately distinguish between healthy and anemic goats using an image classification system based on eye conjunctiva images. In the initial phase, 1,200 eye conjunctiva images from 75 goats were collected at Fort Valley State University farms over a two-week period using smartphone cameras. These images were randomly divided into training (70%) and testing (30%) datasets, with each group containing three subfolders corresponding to FAMACHA scores of 1, 2, and 3. The validation folder included unique images not found in the other folders. A Convolutional Neural Network (CNN) was used for image analysis, incorporating data augmentation techniques such as Resize, RandomHorizontalFlip, RandomVerticalFlip, and RandomRotation. The CNN model was built on the Google Colaboratory platform using CUDA 11.2 and the PyTorch machine learning framework, and comprised three ConvNet layers. Training used the Adam optimizer with a small learning rate of 0.001 and a weight decay of 0.0001 to mitigate exploding gradients, alongside ReLU activations and the cross-entropy loss function over 1,000 epochs. Results demonstrate that the CNN model was highly effective in classifying eye conjunctiva images of goats to detect anemia based on FAMACHA scores. The overall precision of 93.9% indicates that most of the model's positive predictions were correct, and the recall of 92.1% shows that the model captured most of the true anemic cases in the dataset, minimizing false negatives. Examined per FAMACHA score, the model also performed well: with a precision of 100% for FAMACHA score 1, it identified healthy goats without any false positives.
For FAMACHA score 2, the model achieved a precision of 95%, indicating a high level of accuracy in detecting goats with mild anemia. Lastly, for FAMACHA score 3, the precision was 92.9%, demonstrating its effectiveness in identifying goats with more severe anemia. These results show that smartphone-derived images can be a powerful tool for building image classification models that monitor animal health, particularly for detecting anemia in small ruminants. Utilizing smartphone cameras makes the process more accessible, cost-effective, and user-friendly for farmers and veterinary professionals. Despite the strong performance of the CNN model, there is still room for improvement: increasing the size of the training dataset and refining the model development process, such as adjusting the architecture, hyperparameters, or data augmentation techniques, could further enhance performance. These improvements would increase the accuracy and reliability of the model in identifying anemic goats, ultimately leading to better animal health management.
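The training setup described above (a three-layer ConvNet trained with the Adam optimizer at a learning rate of 0.001, weight decay of 0.0001, ReLU activations, and cross-entropy loss in PyTorch) could be sketched roughly as follows. The layer widths, input resolution, and class-index mapping are illustrative assumptions, not details taken from the study:

```python
# Minimal sketch of the abstract's training configuration in PyTorch.
# Channel counts and the 224x224 input size are assumptions for illustration.
import torch
import torch.nn as nn

class ConjunctivaCNN(nn.Module):
    """Three ConvNet blocks plus a linear head for the three FAMACHA classes."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 224x224 input halved three times -> 28x28 feature maps.
        self.classifier = nn.Linear(64 * 28 * 28, num_classes)

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

model = ConjunctivaCNN()
# Adam with lr=0.001 and weight_decay=0.0001, as in the abstract.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; the real pipeline would
# iterate over a DataLoader of augmented conjunctiva images for 1,000 epochs.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 2, 1])  # FAMACHA scores 1-3 mapped to indices 0-2
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In practice the augmentations named in the abstract (Resize, RandomHorizontalFlip, RandomVerticalFlip, RandomRotation) would be applied via a torchvision transform pipeline attached to the training dataset before batching.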
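The per-class precision and recall figures reported above follow the standard definitions over a confusion matrix. A small self-contained sketch, using made-up counts rather than the study's actual test results, shows how such per-FAMACHA-score metrics are computed:

```python
# Per-class precision and recall from a confusion matrix.
# The counts below are hypothetical, not the study's data.
def precision_recall(confusion):
    """confusion[i][j] = number of images with true class i predicted as j."""
    n = len(confusion)
    stats = {}
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # column sum minus TP
        fn = sum(confusion[c]) - tp                        # row sum minus TP
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        stats[f"FAMACHA {c + 1}"] = (precision, recall)
    return stats

# Hypothetical test-set confusion matrix for FAMACHA scores 1-3.
cm = [[40, 0, 0],
      [0, 38, 2],
      [0, 2, 39]]
for label, (p, r) in precision_recall(cm).items():
    print(f"{label}: precision={p:.3f}, recall={r:.3f}")
```

With these illustrative counts, class 1 has no false positives and so reaches a precision of 1.0, mirroring how a perfect column in the confusion matrix yields the 100% precision reported for FAMACHA score 1.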