Abstract

Acoustic technologies provide a non-invasive method to generate information about cow vocalization. This study demonstrated that collar-attached acoustic sensors can differentiate between cow vocalizations and background sounds recorded from cows under grazing conditions. The overall accuracy of the vocalization classification model was 99.5% in the test dataset, based on a total of 709 vocalization recordings from the 10 trial cows, with the number of vocalizations per cow ranging from 3 to 452. Algorithms were also developed to differentiate between three cow vocalization classes (Open mouth, Closed mouth, and Mixed mouth, i.e. Closed mouth followed by Open mouth), with a model accuracy of 85% in the test dataset. Most cows produced all three types of vocalization, and the between-cow variability in the probability that a vocalization was a mixed vocalization had a standard deviation of 0.13 relative to an average probability of 0.74. The duration of the 709 individual vocalizations ranged from 0.88 to 3.37 s, with an average of 1.76 s and a standard deviation of 0.36 s. There was between-cow variation in the duration of vocalization, with a standard deviation of 0.12 ± 0.04 s (P < 0.01). The model for the duration of vocalization had a coefficient of determination of R² = 0.84 in the test dataset. Models to predict the proportion of a mixed vocalization that was closed mouth had a coefficient of determination of R² = 0.72 in the test dataset. This proportion ranged from 0.04 to 0.92, with an average of 0.38 and a standard deviation of 0.15, and also showed between-cow variation, with a standard deviation of 0.08 ± 0.023 (P < 0.01). Mixed vocalizations had a spectral and temporal pattern that was unique to the cow that generated them, and classification models for voice recognition had an accuracy of 80% in the test dataset. A prototype spectral unmixing algorithm was also developed to use the ensemble of acoustic recordings from each cow's collar to assign each vocalization to the cow that generated it. This study demonstrated that there is significant between-cow variability in cow vocalization traits, and that these traits can be determined using cow-attached acoustic sensors to provide information on the welfare and state of the animal.
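
As a rough illustration of the kind of detection task described above, the sketch below trains a binary classifier to separate short vocalization clips from background sound using MFCC summary features. The feature set (librosa MFCCs), the model (a scikit-learn random forest), and the clip/label layout are illustrative assumptions only, not the pipeline reported in the study.

```python
# Hypothetical sketch: classify short acoustic clips as "vocalization" (1)
# vs. "background" (0) from collar recordings. Assumes a list of audio file
# paths and matching integer labels; none of this reflects the study's code.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def clip_features(path, sr=16000, n_mfcc=13):
    """Summarise a clip as per-coefficient MFCC means and standard deviations."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train_vocalization_detector(paths, labels):
    """Fit and evaluate a binary vocalization-vs-background detector."""
    X = np.vstack([clip_features(p) for p in paths])
    y = np.asarray(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
    return clf
```

The same feature-plus-classifier pattern could in principle be extended to the three-class problem (open, closed, mixed mouth) by supplying multi-class labels, though the study itself does not specify the features or model family it used.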
