Abstract

Large-scale sheep farming has conventionally relied on barcodes and ear tags for sheep identification and tracking, devices that can be difficult to implement and maintain. Biological data have also been used for tracking in recent years but have not been widely adopted because of the difficulty and high cost of data collection. To address these issues, this study proposes a noncontact facial recognition technique in which training data were acquired under natural conditions using a series of video cameras as Dupo sheep walked freely through a gate. A key frame extraction algorithm was then applied to automatically generate sheep face data sets representing various poses. An improved version of the MobilenetV2 convolutional neural network, termed Order-MobilenetV2 (O-MobilenetV2), was developed to strengthen feature extraction. In addition, O-MobilenetV2 includes a unique conv3x3 depthwise convolution module, which improved accuracy while reducing the number of required calculations by approximately two-thirds. A series of validation tests was performed in which the algorithm identified individual sheep from facial features, and the proposed model achieved the highest accuracy (95.88%) among comparable algorithms. Beyond its high accuracy and low processing times, this approach does not require the extensive data pre-processing that is common among other models and prohibitive for large sheep populations. This combination of simple operation, low equipment costs, and robustness to variable sheep postures and environmental conditions makes our proposed technique a viable new strategy for sheep facial recognition and tracking.
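
The abstract attributes the reduced computational load to a conv3x3 depthwise convolution module. The full text is not reproduced here, so the sketch below is only an illustration of the general idea, not the authors' O-MobilenetV2 block: it uses PyTorch with hypothetical names and channel sizes (`DepthwiseSeparableConv3x3`, 32 and 64 channels are illustrative) to show how factoring a standard 3x3 convolution into a depthwise step plus a 1x1 pointwise step shrinks the parameter and multiply-add count.

```python
# Illustrative sketch only (not the authors' O-MobilenetV2 code): a generic
# 3x3 depthwise separable convolution block of the kind MobileNetV2-style
# networks are built from.
import torch
import torch.nn as nn


class DepthwiseSeparableConv3x3(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise projection."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.block = nn.Sequential(
            # Depthwise step: one 3x3 filter per input channel (groups=in_ch).
            nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                      padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU6(inplace=True),
            # Pointwise step: 1x1 convolution mixes information across channels.
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)  # dummy 32-channel feature map

    separable = DepthwiseSeparableConv3x3(32, 64)
    standard = nn.Conv2d(32, 64, kernel_size=3, padding=1, bias=False)

    count = lambda m: sum(p.numel() for p in m.parameters())
    print("separable params:", count(separable))        # ~2.5k weights
    print("standard params: ", count(standard))         # ~18.4k weights
    print("output shape:    ", tuple(separable(x).shape))  # (1, 64, 56, 56)
```

For these illustrative channel counts the separable block uses roughly 2.5k weights versus about 18.4k for a plain 3x3 convolution; the exact saving in the paper's network (the reported two-thirds reduction in calculations) depends on where such modules sit in the architecture.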
