Abstract

Animal welfare research has raised concerns regarding the intensification of farm animal housing systems that offer limited opportunity for movement. Applying deep learning models to location tracking provides an opportunity for accurate and timely measurement of cow movement within the housing environment. The objective of this study was to develop an accurate and low-cost alternative to manual tracking of cows' spatial use within their tie-stalls by applying deep learning techniques. Twenty-four lactating Holstein cows were video recorded for a continuous 24-h period on weeks 1, 2, 3, 6, 8, and 10. Individual images showing the in-stall position of each cow were extracted from each 24-h recording at a rate of one image per minute. Three coordinates on each cow were manually annotated on the image sequences to track the locations of the left hip, the right hip, and the neck. The final dataset used to validate the deep learning approach consisted of 199,100 Red-Green-Blue images with manual coordinate annotations. Leave-one-out cross-validation was used to train 5 variants of deep residual networks. Model performance was expressed as the pixel error for each coordinate annotated in the validation image set. Pixel error was converted to a standard measure in cm using the average pixel/cm ratio for each cow in each week. The best model, ResNet-18, achieved an average error across all 3 coordinates equivalent to 1.44 cm in the actual physical placement of the coordinates within the stall environment. Based on this high degree of accuracy, this model could be used to analyze the activity patterns of individual cows for optimization of stall spaces and improved ease of movement.
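The evaluation described above (per-coordinate pixel error converted to cm with a per-cow pixel/cm ratio) can be sketched as follows. This is a minimal illustrative example, not the authors' code; the array shapes, function name, and the example ratio are assumptions made for clarity.

```python
import numpy as np

def keypoint_error_cm(pred, truth, px_per_cm):
    """Mean Euclidean keypoint error in cm.

    pred, truth: (n_images, 3, 2) arrays of (x, y) pixel coordinates
    for the left hip, right hip, and neck keypoints.
    px_per_cm: average pixel-to-cm ratio for this cow/week (assumed scalar).
    """
    # Euclidean pixel error per image and per keypoint -> shape (n_images, 3)
    px_err = np.linalg.norm(pred - truth, axis=-1)
    # Average over all images and all 3 coordinates, then convert to cm
    return px_err.mean() / px_per_cm

# Toy example: predictions offset by 2 px, with a ratio of 4 px/cm
truth = np.zeros((5, 3, 2))
pred = truth + np.array([2.0, 0.0])
print(keypoint_error_cm(pred, truth, px_per_cm=4.0))  # 0.5 (cm)
```

Averaging the pixel error before dividing by the ratio is equivalent to converting each error individually when the ratio is constant per cow and week, as described in the abstract.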

Full Text
Published version (Free)