Accurate posture detection is the foundation for analyzing animal behavior and, in turn, for improving animal welfare. With the development of computer vision, such technology has been widely used to analyze animal behavior without physical contact. However, computer vision methods for pig posture detection often suffer from missed or false detections in complex scenes. To address this problem, this study proposed a novel object detection model, YOLOv5DA, based on YOLOv5s and designed for pig posture detection from 2D camera video. First, we established an annotated dataset of 7220 images, comprising a training set (5776 images), a validation set (722 images), and a test set (722 images). Second, YOLOv5DA was built on YOLOv5s to recognize three pig postures (standing, prone lying, and side lying), incorporating Mosaic9 data augmentation, deformable convolution, and adaptive spatial feature fusion. Comparative and ablation experiments were conducted to verify the model's effectiveness and reliability. Finally, we used YOLOv5DA to detect the posture distribution of pigs. The results revealed that the standing posture was more frequent in the morning and afternoon, while the side-lying posture was most common at noon, indicating that pig posture is influenced by temperature variations. YOLOv5DA accurately identified the standing, prone-lying, and side-lying postures with average precisions (AP) of 99.4%, 99.1%, and 99.1%, respectively. Compared with YOLOv5s, YOLOv5DA effectively handled occlusion while increasing the mean average precision (mAP) by 1.7%. Overall, our work provides a highly accurate, effective, low-cost, and non-contact strategy for posture detection in group-housed pigs, which can be used to monitor pig behavior and assist in the early prevention of disease.
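The abstract names adaptive spatial feature fusion as one of the three additions to YOLOv5s. A minimal, dependency-free sketch of the core idea behind such fusion, combining feature maps that have been rescaled to a common resolution using softmax-normalized weights, might look like the following. The function names and the single scalar weight per level are illustrative assumptions; in practice the fusion weights are typically predicted per pixel inside the network, and the rescaling step is omitted here.

```python
import math

def softmax(scores):
    """Normalize raw weight scores so they sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def asff_fuse(features, scores):
    """Fuse same-shape 2D feature maps with softmax-normalized weights.

    features: list of HxW grids (nested lists), assumed already rescaled
              to a common resolution.
    scores:   one raw weight score per feature level (a simplification;
              real implementations predict per-pixel weights).
    """
    weights = softmax(scores)
    h, w = len(features[0]), len(features[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for level, grid in enumerate(features):
        for i in range(h):
            for j in range(w):
                fused[i][j] += weights[level] * grid[i][j]
    return fused

# Toy example: three 2x2 "feature maps" from different pyramid levels.
f1 = [[1.0, 1.0], [1.0, 1.0]]
f2 = [[2.0, 2.0], [2.0, 2.0]]
f3 = [[4.0, 4.0], [4.0, 4.0]]
fused = asff_fuse([f1, f2, f3], scores=[0.0, 0.0, 0.0])  # equal weights
print(fused[0][0])  # (1 + 2 + 4) / 3
```

With equal scores the softmax assigns each level a weight of 1/3, so every fused cell is the plain average of the three levels; unequal scores let the network emphasize the pyramid level most informative at each location.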