Abstract

The selection of layers during transfer-learning fine-tuning determines a pre-trained model's accuracy and adaptation in a new target domain. However, this selection is still performed manually and without clearly defined criteria; choosing the wrong layers of a neural network can lead to poor accuracy and weak generalization in the target domain. This paper introduces the use of Kullback–Leibler divergence on the weight correlations of a model's convolutional layers. The approach identifies the positive and negative weights among the initial ImageNet weights and selects the best-suited layers of the network according to the correlation divergence. We experiment on four publicly available datasets and six ImageNet pre-trained models used in past studies to allow comparison of results. The proposed approach yields better accuracy than the standard fine-tuning baselines, with a margin of 10.8%–24%, thereby leading to better model adaptation for target transfer-learning tasks.
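To make the layer-selection idea concrete, the following is a minimal sketch of one way a KL-divergence-based criterion over a layer's positive and negative weights could be computed and used to pick layers for fine-tuning. It is not the authors' code: the histogramming of weight magnitudes, the thresholding rule, and all function and variable names are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# score each convolutional layer by the KL divergence between the
# distributions of its positive and negative weight magnitudes, then
# keep the layers whose score exceeds a chosen threshold for fine-tuning.

import numpy as np


def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete probability vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))


def layer_divergence(weights, bins=50):
    """Histogram the positive and negative weights of one layer and
    return the KL divergence between the two distributions."""
    w = np.asarray(weights).ravel()
    pos = np.abs(w[w > 0])
    neg = np.abs(w[w < 0])
    # Shared bin edges so the two histograms are directly comparable.
    edges = np.histogram_bin_edges(np.abs(w), bins=bins)
    p_hist, _ = np.histogram(pos, bins=edges)
    n_hist, _ = np.histogram(neg, bins=edges)
    return kl_divergence(p_hist, n_hist)


def select_layers(named_weights, threshold=0.05):
    """Return layer names whose divergence exceeds the (assumed) threshold,
    i.e. the layers proposed for fine-tuning in the target domain."""
    scores = {name: layer_divergence(w) for name, w in named_weights}
    selected = [name for name, score in scores.items() if score > threshold]
    return selected, scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for ImageNet pre-trained convolutional kernels.
    demo = [
        ("conv1", rng.normal(0.0, 0.05, size=(64, 3, 7, 7))),
        ("conv2", rng.normal(0.01, 0.08, size=(128, 64, 3, 3))),
    ]
    chosen, scores = select_layers(demo)
    print("per-layer divergence:", scores)
    print("layers selected for fine-tuning:", chosen)
```

In practice the same scoring loop would run over the actual pre-trained model's convolutional kernels, and the resulting ranking, rather than manual choice, would decide which layers are unfrozen during fine-tuning.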
