Abstract

Monitoring animal location can be a valuable tool in research and for practical applications, such as health or pasture management. Although GPS is commonly used, other solutions are available, such as RFID or image analysis. Image analysis is a non-invasive technique that has proven useful for monitoring animal location as well as animal behaviour. Most, if not all, applications of image analysis for the continuous monitoring of farm animals have been developed with top-view cameras in indoor conditions. In this article, we develop a framework that combines low-cost time-lapse cameras, machine learning, and image registration to monitor the location of animals in a pasture. We tested our framework by monitoring two flocks of goats under farm-like conditions. One time-lapse camera was able to monitor an area of approximately 20 m by 20 m, and several cameras were combined to monitor the entire pasture. The precision and sensitivity of this method for automatic animal detection were estimated at 90% and 84.5%, respectively, but the results can vary with the layout of the pasture. For example, goats were hardly detectable in front of a natural hedge, which appears dark in the image. In addition, any unwanted elements in the pasture can increase the false positive detection rate. Small animals, such as kids, were also difficult to detect in some cases, as they can be smaller than the weeds. Across the tested layouts, the sensitivity varies from 70.7% to 94.8% and the precision from 83.8% to 95.6%. The spatial accuracy of the method was also estimated: at a distance of 10 m, the maximal accuracy is approximately 56 cm, whereas it is 116 cm when the animals are 20 m from the camera. This study shows that image analysis can be an interesting alternative to GPS, with comparable accuracy and significantly lower cost.
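
For reference, the precision and sensitivity figures above follow the standard definitions based on true positives (TP), false positives (FP), and false negatives (FN). The short Python sketch below uses hypothetical detection counts, not data from the study, purely to illustrate the computation.

    def precision_sensitivity(tp: int, fp: int, fn: int) -> tuple[float, float]:
        """Precision = TP / (TP + FP); sensitivity (recall) = TP / (TP + FN)."""
        precision = tp / (tp + fp)
        sensitivity = tp / (tp + fn)
        return precision, sensitivity

    # Hypothetical counts for one set of annotated images (not the study's data),
    # chosen only to produce values close to those reported above.
    p, s = precision_sensitivity(tp=450, fp=50, fn=83)
    print(f"precision = {p:.1%}, sensitivity = {s:.1%}")  # 90.0% and 84.4%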

Highlights

  • Affordable sensors are widely available and offer great opportunities to farmers and researchers

  • Monitoring using computer vision means that images are captured and automatically analysed to extract the desired information

  • Spatial accuracy: we determine the accuracy of the animal location workflow, i.e. the combination of the projected animal centroids and the geometric transformation f used to estimate the spatial coordinates of the animals (sketched below)
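
The highlight above refers to a geometric transformation f, obtained by image registration, that maps detected animal centroids to spatial coordinates. The exact form of f is not specified here; the sketch below assumes it can be represented as a planar homography estimated from ground reference points, using OpenCV. The marker coordinates and the detected centroid are placeholders, not values from the study.

    import numpy as np
    import cv2

    # Hypothetical reference points: pixel coordinates of four ground markers in the
    # time-lapse image and their corresponding pasture coordinates in metres.
    pixel_pts = np.array([[102, 655], [1810, 640], [1540, 215], [390, 230]], dtype=np.float32)
    field_pts = np.array([[0, 0], [20, 0], [20, 20], [0, 20]], dtype=np.float32)

    # Estimate a planar homography H, one possible form of the transformation f.
    H, _ = cv2.findHomography(pixel_pts, field_pts)

    def pixel_to_field(centroid_px):
        """Project a detected animal centroid (pixel coords) onto the pasture plane (metres)."""
        pt = np.array([[centroid_px]], dtype=np.float32)  # shape (1, 1, 2)
        return cv2.perspectiveTransform(pt, H)[0, 0]      # (x, y) in metres

    # Example: a goat whose centroid was detected at pixel (960, 430).
    print(pixel_to_field((960.0, 430.0)))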

Introduction

Affordable sensors are widely available and offer great opportunities to farmers and researchers. With RFID, when the animal is in a particular area, its transponder is detected by one of the antennas, so the animal is known to be within a certain distance of that antenna. This affords poor spatial accuracy, and it is generally better suited to monitoring the visits of animals to a particular zone, such as a watering hole or feeding point (Adrion et al., 2018). Acoustic tags are mostly used in fisheries and rely on triangulation between the tags and the receivers. For all these methods, the animals must be fitted with a tag, which is not always desirable: managing and fitting the tags can be time-consuming, as well as painful for the animal. Images can be captured via a simple RGB camera (Kashiha et al., 2014), a CCTV camera (Nasirahmadi et al., 2017), an infrared camera (Zhou et al., 2017), a 3D camera (Kongsro, 2014; Mortensen et al., 2016), or a depth camera (Leonard et al., 2019).
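
The triangulation mentioned for acoustic tags is not part of the framework developed in this article, but the principle is easy to illustrate. Strictly speaking, the sketch below is a trilateration: with known receiver positions and measured tag-to-receiver distances, the tag position follows from a least-squares fit. The receiver layout and ranges are made up for illustration.

    import numpy as np

    def trilaterate(receivers, ranges):
        """Least-squares 2D position estimate from distances to fixed receivers.

        receivers: (n, 2) array of receiver coordinates (n >= 3)
        ranges:    (n,) array of measured tag-to-receiver distances
        """
        receivers = np.asarray(receivers, dtype=float)
        ranges = np.asarray(ranges, dtype=float)
        x1, y1 = receivers[0]
        r1 = ranges[0]
        # Subtract the first range equation from the others to obtain a linear system.
        A = 2.0 * (receivers[1:] - receivers[0])
        b = (r1**2 - ranges[1:]**2
             + receivers[1:, 0]**2 - x1**2
             + receivers[1:, 1]**2 - y1**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Made-up example: three receivers at the corners of a 20 m x 20 m area
    # and noiseless ranges to a tag placed at (12, 7).
    receivers = [(0, 0), (20, 0), (0, 20)]
    true_pos = np.array([12.0, 7.0])
    ranges = [np.hypot(*(true_pos - r)) for r in receivers]
    print(trilaterate(receivers, ranges))  # approximately [12. 7.]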
