Abstract

Behavioural scientists track animal behaviour patterns by constructing ethograms, which detail the activities of cattle over time. To achieve this, scientists currently review video footage from multiple cameras located in and around the pen that houses the animals, extracting each animal's location to determine its activity. This is a time-consuming, laborious task that could be automated. In this paper we extend the well-known Real-Time Compressive Tracking algorithm to automatically determine the location of dairy and beef cows from multiple video cameras in the pen. Several optimisations are introduced to improve the algorithm's accuracy. An automatic approach for updating the bounding box, which discourages the algorithm from learning the background, is presented. We also dynamically weight the location estimates from multiple cameras using boosting, avoiding errors introduced by occlusion and by the tracked animal moving into and out of the field of view.
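The multi-camera weighting described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`fuse_camera_estimates`, `update_weights`), the `beta` down-weighting factor, and the specific boosting-style rule (exponentially penalising cameras whose estimates disagree with the fused position) are all illustrative assumptions, showing only the general idea of fusing per-camera location estimates with adaptive weights.

```python
import numpy as np

def fuse_camera_estimates(estimates, weights):
    """Combine per-camera (x, y) location estimates into a single
    position using a normalised weighted average."""
    estimates = np.asarray(estimates, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ estimates

def update_weights(weights, errors, beta=0.5):
    """Boosting-style update (hypothetical rule): exponentially
    down-weight cameras whose last estimate disagreed with the fused
    position, then renormalise so the weights sum to one."""
    weights = np.asarray(weights, dtype=float) * beta ** np.asarray(errors, dtype=float)
    return weights / weights.sum()

# Three cameras report a cow's position; camera 2 is occluded and drifts.
est = [(10.0, 5.0), (10.5, 5.2), (40.0, 30.0)]
w = np.ones(3)
fused = fuse_camera_estimates(est, w)
# Per-camera disagreement with the fused estimate.
err = np.linalg.norm(np.asarray(est) - fused, axis=1)
w = update_weights(w, err)  # occluded camera now contributes far less
```

Under this scheme a camera whose view is occluded, or from which the animal has left the field of view, rapidly loses influence on the fused location estimate.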

