Simple Summary

Real-time and automatic detection of chickens such as laying hens and broilers is the cornerstone of precision poultry farming. For laying hens, detection is more challenging under cage-free conditions compared to caged systems. In this study, we developed a deep learning model (YOLOv5x-hens) to monitor hens’ behaviors in cage-free facilities. More than 1000 images were used to train the model, and an additional 200 images were used to test it. The newly developed YOLOv5x-hens showed stable performance in detecting birds under different lighting intensities, angles, and ages over 8 weeks. According to further data analysis, the model performed efficiently in real-time detection, with an overall accuracy of more than 95%, which is the key step toward tracking individual birds for evaluation of production and welfare. However, the current version of the model still has some limitations. Detection errors were caused by highly overlapping birds, uneven light intensity, and occlusion by equipment (e.g., drinking lines and feeders). Future research is needed to address these issues for a higher detection rate. The current study provides a technical basis for developing a machine vision system to track individual birds and evaluate their behaviors and welfare status in commercial cage-free houses.

Abstract

Real-time and automatic detection of chickens (e.g., laying hens and broilers) is the cornerstone of precision poultry farming based on image recognition. However, such identification becomes more challenging under cage-free conditions compared to caged systems. In this study, we developed a deep learning model (YOLOv5x-hens) based on YOLOv5, an advanced convolutional neural network (CNN), to monitor hens’ behaviors in cage-free facilities. More than 1000 images were used to train the model, and an additional 200 images were used to test it.
One-way ANOVA and Tukey’s HSD analyses were conducted in JMP software (JMP Pro 16 for Mac, SAS Institute, Cary, North Carolina) to determine whether there were significant differences between the predicted and actual numbers of hens under various conditions (i.e., age, light intensity, and observational angle). Differences were considered significant at p < 0.05. Our results show that the evaluation metrics (precision, recall, F1, and mAP@0.5) of the YOLOv5x-hens model were 0.96, 0.96, 0.96, and 0.95, respectively, in detecting hens on the litter floor. The newly developed YOLOv5x-hens showed stable performance in detecting birds under different lighting intensities, angles, and ages over 8 weeks (i.e., birds 8–16 weeks old). For instance, the model achieved 95% accuracy once the birds were 8 weeks old. However, younger chicks, such as one-week-old birds, were harder to track (e.g., only 25% accuracy) due to interference from equipment such as feeders, drinking lines, and perches. According to further data analysis, the model performed efficiently in real-time detection, with an overall accuracy of more than 95%, which is the key step toward tracking individual birds for evaluation of production and welfare. However, the current version of the model has some limitations. Detection errors were caused by highly overlapping birds, uneven light intensity, and occlusion by equipment (e.g., drinking lines and feeders). Future research is needed to address these issues for a higher detection rate. The current study established a novel CNN-based deep learning model for hen detection in research cage-free facilities, which provides a technical basis for developing a machine vision system to track individual birds and evaluate their behaviors and welfare status in commercial cage-free houses.
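The statistical comparison described in the abstract (one-way ANOVA followed by Tukey’s HSD at p < 0.05) can be sketched with open-source tools; the study itself used JMP Pro 16, and the group data below are made-up illustrative values (counting errors per light-intensity group), not the study’s measurements:

```python
# Minimal sketch of a one-way ANOVA + Tukey HSD workflow, assuming
# synthetic data; the study used JMP Pro 16, not scipy.
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

# Illustrative per-image counting errors (predicted - actual hen count)
# for three hypothetical light-intensity groups; values are made up.
rng = np.random.default_rng(0)
low = rng.normal(0.0, 1.0, 30)
medium = rng.normal(0.2, 1.0, 30)
high = rng.normal(0.1, 1.0, 30)

# One-way ANOVA: do the group means differ?
f_stat, p_value = f_oneway(low, medium, high)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.3f}")

# Tukey HSD post hoc test identifies which pairs differ,
# reported at the study's significance threshold of p < 0.05.
if p_value < 0.05:
    print(tukey_hsd(low, medium, high))
else:
    print("No significant difference between groups at p < 0.05")
```

A non-significant result here would mirror the paper’s finding of stable detection across lighting conditions and angles for older birds.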
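The reported evaluation metrics follow the standard definitions for object detection; as a sketch, the counts below are hypothetical and merely chosen to reproduce the paper’s reported 0.96 values:

```python
# Standard detection metrics from true positives (tp), false positives (fp),
# and false negatives (fn); the counts used below are illustrative only.
def detection_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, F1) for a set of detections."""
    precision = tp / (tp + fp)          # fraction of detections that are hens
    recall = tp / (tp + fn)             # fraction of hens that were detected
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = detection_metrics(tp=96, fp=4, fn=4)
print(f"Precision={p:.2f}, Recall={r:.2f}, F1={f1:.2f}")
# → Precision=0.96, Recall=0.96, F1=0.96
```

mAP@0.5 additionally averages precision over recall levels at an intersection-over-union threshold of 0.5, which requires the full set of scored bounding boxes rather than summary counts.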