Abstract

Transfer learning is widely used to automatically recognize and filter out empty camera trap images that contain no animals. Existing studies that apply transfer learning to empty-image identification typically update only the fully connected layer of the model and select a pre-trained source model based solely on its relevance to the target task. However, they neither optimize the selection of layers to update nor investigate how the sample size and number of classes of the source-domain dataset used to build the source model affect the performance of the transferred model. Both issues are worth exploring. We addressed them using three different datasets and the ResNeXt-101 model. Our experimental results showed that, when transferring the model from the ImageNet dataset to the Snapshot Serengeti dataset with 20,000 training samples, our proposed optimal update layers improved the accuracy of the transferred model from 92.9% to 95.5% (z = −7.087, p < 0.001, N = 8118) compared with the existing practice of updating only the fully connected layer. A similar improvement was observed when transferring the model from ImageNet to the Lasha Mountain dataset. In addition, when the pre-trained model was updated with 20,000 training samples, increasing the sample size of the binary-class dataset used to build the source model from 100,000 to 1 million improved the accuracy of the transferred model from 90.4% to 93.5% (z = −3.869, p < 0.001, N = 8948). Similar results were obtained when the source-domain dataset was constructed with ten classes. From these results we drew the following conclusions: (1) using our proposed optimal update layers instead of the common practice of updating only the fully connected layer can significantly improve model performance; (2) the optimal update layers differ when the model is transferred from different source-domain datasets to the same target dataset; (3) the number of classes in the source-domain dataset does not significantly affect the performance of the transferred model, whereas the sample size of the source-domain dataset is positively correlated with performance and may exhibit a threshold effect.
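
For illustration, the following is a minimal PyTorch sketch of the two fine-tuning strategies contrasted above: updating only the fully connected layer versus also updating deeper layers of an ImageNet pre-trained ResNeXt-101. The choice of the last residual stage (layer4) as the additional update layer is a hypothetical placeholder, not the optimal update layers reported in the paper.

import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes=2, update_deeper_layers=False):
    # Load a ResNeXt-101 (32x8d) backbone pre-trained on ImageNet (the source domain).
    model = models.resnext101_32x8d(weights=models.ResNeXt101_32X8D_Weights.IMAGENET1K_V1)
    # Freeze all parameters first.
    for param in model.parameters():
        param.requires_grad = False
    # Replace the classification head for the binary empty/animal task;
    # the newly created layer's parameters are trainable by default.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    if update_deeper_layers:
        # Illustrative deeper update scheme (assumption): also unfreeze the last residual stage.
        for param in model.layer4.parameters():
            param.requires_grad = True
    return model

# Only parameters that remain trainable are handed to the optimizer.
model = build_transfer_model(num_classes=2, update_deeper_layers=True)
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable_params, lr=0.001, momentum=0.9)

Selecting which layers to unfreeze in this way is what the layer-selection comparison in the study varies; the hyperparameters shown (learning rate, momentum) are illustrative assumptions only.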
