Cow recognition forms the foundation of smart livestock management. Current closed-set cow recognition models can identify only the cows in their training set and are ill-equipped to recognize other individuals in the open world, and comprehensive research on open-set cow recognition remains scarce. Existing open-set recognition models not only exhibit suboptimal accuracy but also impose a substantial burden on hardware resources because of their extensive parameterization. These challenges hinder the practical deployment of cow recognition models. This study introduces an open-set metric learning-based recognition framework for cow-back patterns, aiming to rectify a key limitation of individual cow recognition models: their inability to identify cows not included in the training set. The proposed framework combines various loss functions, metric methods, and backbone networks, enabling open-set recognition of cow-back pattern images and thus the identification of cows the model has never seen. To reduce hardware resource consumption, we devised a novel ultra-lightweight backbone network, LightCowsNet, which draws inspiration from existing lightweight backbones such as MobileFaceNet, MobileViT, and EfficientNetV2 and efficiently extracts features from cow-back pattern images. LightCowsNet combines attention mechanisms, inverted residual structures, and depthwise separable convolutions to build multiple new feature extraction modules. These structurally distinct modules are placed at different stages of the network to extract different levels of image information, enabling complete extraction of the textural information in the cows' back patterns.
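Depthwise separable convolutions are one reason such backbones stay small. The following minimal sketch (with hypothetical channel sizes, not taken from the paper) compares the parameter count of a standard convolution layer with that of its depthwise separable equivalent:

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv (no bias)."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    # Illustrative layer sizes, not from LightCowsNet itself.
    c_in, c_out, k = 64, 128, 3
    std = standard_conv_params(c_in, c_out, k)        # 64*128*9 = 73728
    sep = depthwise_separable_params(c_in, c_out, k)  # 576 + 8192 = 8768
    print(std, sep, round(std / sep, 1))
```

For this layer configuration the separable variant needs roughly 8x fewer parameters, which is the kind of saving that lets a full backbone fit in a few megabytes.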
The present study reorganized the Cows2021 dataset to render it suitable for open-set recognition of cow-back patterns, with the test set comprising image pairs. The results revealed that using A-SoftMax as the loss function and Euclidean distance as the metric method enabled LightCowsNet to achieve the highest accuracy of 94.26%, with a model weight file of only 4.06 MB. Compared with MobileFaceNet, MobileViT, and EfficientNetV2, LightCowsNet achieved 6.24%, 8.82%, and 2.3% greater accuracy on the test set, respectively, while its weight file is 0.71 MB, 0.24 MB, and 4.3 MB smaller, respectively. These results demonstrate that the proposed model achieves both higher accuracy and a lighter footprint, and these findings could serve as reference solutions for intelligent farming on cow farms.
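A test set of image pairs implies a verification-style evaluation: two images are declared the same individual when the distance between their embeddings falls below a calibrated threshold, and unseen cows are handled naturally because no fixed class list is required. A minimal sketch of this decision rule, assuming precomputed embedding vectors and an illustrative threshold (both hypothetical, not values from the paper):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_cow(emb_a, emb_b, threshold=1.0):
    """Open-set pair verification: the pair is the same individual
    iff the embedding distance is below the threshold; otherwise it
    is treated as a different (possibly previously unseen) cow.
    The threshold would be tuned on a validation set in practice."""
    return euclidean(emb_a, emb_b) < threshold

if __name__ == "__main__":
    # Toy embeddings for illustration only.
    cow_1 = [0.1, 0.9, 0.2]
    cow_1_again = [0.15, 0.85, 0.25]
    cow_2 = [0.9, 0.1, 0.8]
    print(same_cow(cow_1, cow_1_again))  # close pair -> True
    print(same_cow(cow_1, cow_2))        # distant pair -> False
```

Because the decision depends only on a distance threshold rather than on classifier logits, the same trained network can verify identities of cows absent from the training set.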