Abstract

Transfer learning can improve classification performance in a target domain with insufficient training data by exploiting knowledge from source domains related to the target domain. Nowadays it is common for two or more source domains to be available for knowledge transfer, which can further improve learning performance in the target domain. However, classification performance in the target domain degrades when the probability distributions of the domains are mismatched. Recent studies have shown that deep learning can resist such mismatch by building deep structures that extract more effective features. In this paper, we propose a new multi-source deep transfer neural network algorithm, MultiDTNN, based on convolutional neural networks and multi-source transfer learning. In MultiDTNN, joint probability distribution adaptation (JPDA) reduces the mismatch between the source and target domains, enhancing the transferability of source-domain features in deep neural networks. A convolutional neural network is then trained on each source domain together with the target domain, yielding a set of candidate classifiers. Finally, the designed selection strategy picks, from this set, the classifier with the smallest classification error on the target domain to assemble the MultiDTNN framework. The effectiveness of the proposed MultiDTNN is verified by comparing it with other state-of-the-art deep transfer learning methods on three datasets.
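The pipeline described in the abstract (per-source training followed by error-based classifier selection) can be summarized in a minimal sketch. The code below is illustrative rather than the authors' implementation: a scikit-learn linear model stands in for the paper's JPDA-regularized CNN, and all names (multi_dtnn_select, source_domains, and so on) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def multi_dtnn_select(source_domains, target_train, target_val):
    """Sketch of MultiDTNN's selection strategy, under the
    assumptions stated above: train one classifier per source
    domain, then keep the one with the smallest classification
    error on held-out target data."""
    Xt_tr, yt_tr = target_train
    Xt_val, yt_val = target_val
    best_clf, best_err = None, np.inf
    for Xs, ys in source_domains:
        # Train on one source domain plus the small labeled target
        # set; in the paper this step is a CNN trained with a JPDA
        # term that aligns the two domains' joint distributions.
        clf = LogisticRegression(max_iter=1000).fit(
            np.vstack([Xs, Xt_tr]), np.concatenate([ys, yt_tr]))
        err = 1.0 - clf.score(Xt_val, yt_val)  # target-domain error
        if err < best_err:
            best_clf, best_err = clf, err
    return best_clf
```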

Highlights

  • Multi-source transfer brings more knowledge into the target domain by drawing on multiple source domains, from which classification models for the target domain are built

  • Convolutional neural networks (CNNs) extract more complex features from the datasets

  • Joint probability distribution adaptation (JPDA) reduces the difference in probability distributions between domains and increases the transferability of features from the source domains (see the sketch below)
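As referenced in the last highlight, the core of a JPDA-style term is a discrepancy between the source and target distributions, measured both marginally and per class. The following is a minimal sketch of that idea using a linear-kernel maximum mean discrepancy (MMD); the function names, the pseudo-label handling, and the weight mu are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def mmd2_linear(X_s, X_t):
    """Squared MMD with a linear kernel: the squared Euclidean
    distance between the two samples' feature means."""
    return float(np.sum((X_s.mean(axis=0) - X_t.mean(axis=0)) ** 2))

def jpda_style_discrepancy(X_s, y_s, X_t, y_t_pseudo, mu=0.5):
    """Hypothetical JPDA-style objective: a marginal discrepancy
    over P(X) plus class-conditional discrepancies over P(X|Y),
    computed with pseudo-labels on the unlabeled target data and
    mixed by mu. Minimizing such a term during training pulls the
    source and target joint distributions together."""
    marginal = mmd2_linear(X_s, X_t)
    conditional = 0.0
    for c in np.unique(y_s):
        X_t_c = X_t[y_t_pseudo == c]
        if len(X_t_c) > 0:  # skip classes missing from pseudo-labels
            conditional += mmd2_linear(X_s[y_s == c], X_t_c)
    return (1.0 - mu) * marginal + mu * conditional
```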

Introduction

In the past two decades, machine learning has progressed dramatically, moving from the laboratory into widespread commercial use [1]. To obtain a highly accurate classification model, many machine learning algorithms must satisfy two basic conditions: (1) the training and test data come from the same feature space and the same distribution, i.e., they satisfy the independent and identically distributed (i.i.d.) assumption; and (2) enough training samples are available. These assumptions are not always met in practical applications [11,12].
