Abstract

Sensor technologies to quantify the feeding behaviour of free-grazing domesticated herbivores are required. Acoustic monitoring is a promising method, but signal processing algorithms to automatically identify and classify sound-producing jaw movements are not well developed. We present an algorithm for jaw movement identification that is designed to be as general as possible; it requires no calibration and identifies jaw movements according to key features in the time domain that are defined in relative terms. A machine-learning approach is used to separate true jaw-movement sounds from background noise and intense spurious noises. The performance of the algorithm's software implementation was tested in three field studies by comparing its output with that generated by aural sequencing. For cattle grazing green pasture in a low-noise environment with a Lavalier microphone positioned on the forehead, the system achieved 94% correct identification (i.e., aural events matched by software events within a tolerance of 0.2 s) and a false positive rate (i.e., software events not similarly matched by aural events) of 7%. For goats grazing green herbage in an extremely noisy environment, and with a piezoelectric microphone positioned on the horn, the system achieved 96% correct identification and 4% false positives. For sheep grazing dry pasture in an environment characterised by frequent intense noises, and with a piezoelectric microphone positioned on the horn, the system achieved 84% correct identification and 24% false positives. Very low error rates can be obtained from the software when intense extraneous noises are avoided.
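To make the evaluation metrics concrete, the sketch below illustrates how software-detected jaw-movement times could be matched to aurally labelled times within the 0.2 s tolerance described above, yielding a correct-identification rate and a false-positive rate. The greedy nearest-neighbour matching strategy, the function name, and the example timestamps are assumptions for illustration only; the abstract does not specify how the matching was implemented.

```python
def match_events(aural_times, software_times, tolerance=0.2):
    """Match software-detected events to aurally labelled events.

    Assumption: each aural event may be claimed by at most one software
    event lying within `tolerance` seconds of it (0.2 s in the paper's
    evaluation). Returns the correct-identification rate (fraction of
    aural events matched) and the false-positive rate (fraction of
    software events left unmatched).
    """
    aural = sorted(aural_times)
    software = sorted(software_times)
    matched_aural = set()
    matched_software = set()

    for j, t_sw in enumerate(software):
        # Find the nearest still-unmatched aural event within tolerance.
        best_i, best_dt = None, tolerance
        for i, t_au in enumerate(aural):
            if i in matched_aural:
                continue
            dt = abs(t_sw - t_au)
            if dt <= best_dt:
                best_i, best_dt = i, dt
        if best_i is not None:
            matched_aural.add(best_i)
            matched_software.add(j)

    correct_id = len(matched_aural) / len(aural) if aural else 0.0
    false_pos = 1.0 - len(matched_software) / len(software) if software else 0.0
    return correct_id, false_pos


# Hypothetical example: event timestamps (s) from aural labelling vs. software output.
aural = [1.02, 1.75, 2.48, 3.30, 4.05]
software = [1.10, 1.80, 2.60, 3.95, 4.10, 5.50]
cid, fp = match_events(aural, software)
print(f"Correct identification: {cid:.0%}, false positives: {fp:.0%}")
```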
