Abstract

Lameness negatively affects the welfare and production of dairy cows. In previous computer-vision studies of lameness detection, cow walking videos contained considerable irrelevant information, which lowered detection accuracy. In this study, we propose a lameness detection method for dairy cattle, the Dimension-Reduced Spatiotemporal Network (DRSN), which combines video dimensionality reduction with deep learning to reduce the impact of irrelevant information. The YOLOv4 algorithm first detects the cow's hooves in the video. The video is then reduced to a spatiotemporal image based on the detected leg locations, retaining gait information while removing much of the irrelevant content. Finally, the DenseNet algorithm classifies the degree of lameness into a locomotion score from the spatiotemporal image. Videos of 456 cows were used as the test dataset to evaluate the method. Among the object detection and classification algorithms compared, YOLOv4 and DenseNet performed best, with detection and classification accuracies of 92.39% and 98.50%, respectively. Our method was also compared with video-based lameness classification, and the experimental results showed that our method was more accurate. The proposed approach effectively removes irrelevant information from dairy cow walking videos and improves lameness detection accuracy, and it can be further integrated into a system for automatic detection of cow lameness.
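The core idea of the dimensionality-reduction step can be illustrated with a minimal sketch: given per-frame hoof locations (as a stand-in for YOLOv4 detector output), extract a thin pixel strip around the detected leg in each frame and stack the strips over time into a single 2-D spatiotemporal image. The function and parameter names below are hypothetical, and the data is synthetic; this is not the paper's implementation, only an illustration of the reduction principle.

```python
import numpy as np

def build_spatiotemporal_image(frames, hoof_centers, strip_half_width=16):
    """Collapse a walking video into one 2-D spatiotemporal image.

    frames:       (T, H, W) grayscale video as a NumPy array
    hoof_centers: (T, 2) per-frame (row, col) of the detected hoof
                  (hypothetical detector output, e.g. a box center)
    Returns a (T, 2*strip_half_width) image: one row per frame, taken as a
    horizontal pixel strip centered on the detected hoof column, so gait
    motion over time is preserved while background pixels are discarded.
    """
    T, H, W = frames.shape
    rows = []
    for t in range(T):
        r, c = hoof_centers[t]
        # Clip so the strip stays inside the frame.
        c0 = int(np.clip(c - strip_half_width, 0, W - 2 * strip_half_width))
        rows.append(frames[t, int(r), c0:c0 + 2 * strip_half_width])
    return np.stack(rows)

# Synthetic demo: 30 frames of 120x160 "video" with a hoof moving left to right.
rng = np.random.default_rng(0)
video = rng.random((30, 120, 160))
centers = np.column_stack(
    [np.full(30, 100), np.linspace(20, 140, 30)]
).astype(int)
st_img = build_spatiotemporal_image(video, centers)
print(st_img.shape)  # (30, 32): frames along one axis, strip pixels along the other
```

The resulting image is far smaller than the raw video, which is the point of the reduction: a 2-D classifier such as DenseNet can then score lameness from this compact gait representation instead of the full frame sequence.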
