The dangerous body movements of lactating sows are mainly reflected in the frequency and duration of their posture changes. To avoid disturbances from the light environment, such as illumination variations caused by heat lamps and the day-night cycle, depth videos were acquired for posture change detection. In this paper, a CNN-based method was developed to automatically detect sow posture changes in untrimmed depth videos. First, a clip-level Detect and Classify Posture R-CNN (DCP R-CNN) was proposed for sow detection and posture classification. The DCP R-CNN decoupled detection and classification into two branches, a detection branch and a classification branch, so that the detection branch could focus on sow localization. Second, a coarse-to-fine method was proposed to temporally localize possible posture-change segments. Finally, a two-stream posture change classification network, which used two CNNs to extract complementary features from the depth motion map (DMM) and from the first and last frames of a segment, was introduced to classify each candidate segment as change or non-change. In a commercial piggery, videos were acquired at 5 frames per second (fps) with a Microsoft Kinect v2.0 sensor installed above the pen. The test set consisted of two 24-hour and six 40-minute untrimmed depth videos. The test results showed that our method detected posture changes correctly: the success rate of DCP R-CNN was 96.38% at an IoU threshold of 0.7, and the average precision and recall over the 4 posture types were 97.72% and 95.23%, respectively. Possible posture changes were localized with a recall of 92.46%, and the candidate segments were classified with an accuracy of 97.71%. The overall precision of posture change detection was 88.75% and the recall was 90.87%. The detection speed of our method was 5.71 fps, faster than the video frame rate (5 fps).
Our study demonstrated the feasibility of 24-hour automatic detection of sow posture changes.
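The DMM feature used by the two-stream network summarizes motion across a depth clip by accumulating inter-frame differences. As a minimal sketch of that idea (not the paper's exact formulation; the function name, noise threshold `eps`, and absolute-difference accumulation are assumptions here), a depth motion map can be computed as:

```python
import numpy as np

def depth_motion_map(frames, eps=10.0):
    """Accumulate absolute inter-frame depth differences into one map.

    frames: array-like of shape (T, H, W), one depth image per time step.
    eps:    noise threshold (depth units); smaller differences are
            treated as sensor noise and discarded. Both the threshold
            and the exact accumulation rule are illustrative assumptions.
    """
    frames = np.asarray(frames, dtype=np.float32)
    diffs = np.abs(np.diff(frames, axis=0))  # |D_{t+1} - D_t| per pixel
    diffs[diffs < eps] = 0.0                 # suppress small fluctuations
    return diffs.sum(axis=0)                 # (H, W) accumulated motion map
```

Pixels where the sow moved accumulate large values, while the static pen floor stays near zero, so a single 2-D map can feed one CNN stream while the raw first and last frames feed the other.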