Abstract

Traditional machine learning methods rely on the assumption that training and test data are drawn from the same distribution, an assumption that often fails in real-world applications. Moreover, deep learning models require substantial amounts of labeled data for classification tasks, and limited samples may lead to overfitting. In many real-world scenarios, the target domain does not contain enough labeled samples for learning. Transfer learning offers an effective solution by allowing knowledge from a source domain to be transferred to a target domain. In addition, data augmentation enhances model generalization by increasing the number of training samples, which is particularly beneficial when target-domain data are limited. In this paper, we improve classification performance by integrating transfer learning techniques with a data augmentation strategy. Extensive experiments across multiple datasets verify the effectiveness of the proposed approach.
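
To illustrate the general idea described above, the following is a minimal sketch (not the authors' implementation) of combining transfer learning with data augmentation for image classification. It assumes PyTorch/torchvision, an ImageNet-pretrained ResNet-18 as the source-domain model, and a hypothetical 10-class target domain.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation: enlarge the effective target-domain sample pool
# with random crops, flips, and color jitter.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Transfer learning: start from a source-domain (ImageNet) pretrained backbone,
# freeze its feature extractor, and fine-tune only a new classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
num_target_classes = 10  # hypothetical target-domain label count
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on an augmented target-domain mini-batch.
def train_step(images, labels):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The sketch separates the two ingredients mentioned in the abstract: the transform pipeline augments scarce target-domain data, while the frozen pretrained backbone carries source-domain knowledge so that only a small classification head must be learned from the limited target samples.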
