Abstract

• Cattle can be identified from an image taken from any point of view.
• Deep convolutional neural networks provide strong feature extraction capability.
• Compact loss strengthens the separability and discriminative power of the learned features.
• The models can identify unseen individuals without retraining.

Visual cattle identification from images taken at arbitrary viewpoints in real farm environments provides an essential basis for real-time cattle monitoring in precision livestock farming. This work uses multi-view images of cattle in the wild, exploiting the intrinsic physical characteristics of the breed to identify individual cattle from an image taken from any point of view. The main challenge in feature learning with Deep Convolutional Neural Networks (DCNNs) for identification tasks, such as face recognition (FR) and cattle identification, is the design of efficient loss functions that strengthen the discriminative power of the deeply learned features. Aiming to directly obtain a separable and discriminative metric for visual cattle identification under an open-set protocol, the contributions of this work are as follows. First, based on an experimental analysis of the influence of the bias term in the last fully connected layer, we reformulate SoftMax as SoftMax-nB by eliminating the bias term, which yields more separable features on the hypersphere after normalization. We then propose compact loss, which combines SoftMax-nB with distance metric learning, including triplet loss and tight loss. The learning of the DCNN is jointly supervised to maximize inter-class distance and minimize intra-class variance simultaneously, which greatly enhances the discriminative power of the features. We present a series of experiments to evaluate the performance of compact loss on our MVCAID100 dataset and on OpenCows2020. Encouragingly, compact loss not only outperforms state-of-the-art face recognition models, including ArcFace and NPT Loss, on our MVCAID100 dataset, but also surpasses SoftMax-based reciprocal triplet loss on OpenCows2020 under the same open-set protocol. The MVCAID100 dataset will be made publicly available upon acceptance of the paper.
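
To make the composition of the loss concrete, the following is a minimal PyTorch-style sketch of a bias-free SoftMax head on normalized features combined with triplet and intra-class compactness terms. The class name, the term weights, and the exact form of the "tight" term are assumptions for illustration only and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompactLossSketch(nn.Module):
    """Illustrative sketch: bias-free SoftMax (SoftMax-nB idea) plus
    distance-metric terms. Term weights and the 'tight loss' form are
    assumptions, not the paper's exact formulation."""

    def __init__(self, feat_dim, num_classes, margin=0.3,
                 w_triplet=1.0, w_tight=0.1):
        super().__init__()
        # Last fully connected layer WITHOUT a bias term.
        self.fc = nn.Linear(feat_dim, num_classes, bias=False)
        self.margin = margin
        self.w_triplet = w_triplet
        self.w_tight = w_tight

    def forward(self, anchor, positive, negative, labels):
        # L2-normalize embeddings so they lie on the unit hypersphere.
        a = F.normalize(anchor, dim=1)
        p = F.normalize(positive, dim=1)
        n = F.normalize(negative, dim=1)

        # Bias-free SoftMax classification term on the anchor embeddings.
        loss_cls = F.cross_entropy(self.fc(a), labels)

        # Triplet term: push negatives farther away than positives by a margin.
        loss_tri = F.triplet_margin_loss(a, p, n, margin=self.margin)

        # Assumed 'tight' term: directly shrink intra-class distances.
        loss_tight = (a - p).pow(2).sum(dim=1).mean()

        return loss_cls + self.w_triplet * loss_tri + self.w_tight * loss_tight
```

Normalizing the embeddings before the bias-free classification layer means the decision is driven by angular separation on the hypersphere, which is the motivation given above for removing the bias term.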
