In recent years, deep neural networks (DNNs) have become the de facto models for practically all visual tasks and most temporal analysis tasks, owing to the abundance of labeled data and advances in computational resources. Deep transfer learning, a learning paradigm that investigates how deep neural networks can leverage knowledge from other domains, has been successful in many practical applications. Most deep transfer learning methods, especially instance-based and mapping-based methods, rely on direct access to source domain data. In some cases, however, the source domain data required for transfer cannot be provided because of privacy or copyright constraints. To address this, we propose a deep representation-based transfer learning method for deep neural networks that handles transfer scenarios without source domain data. The proposed method induces the target network to learn the similarity structure among source deep representations by minimizing the representational similarity discrepancy between the two networks, thereby transferring domain knowledge from the source network to the target network. In addition, the method is combined with boosting-based deep representation transfer to exploit the source domain knowledge more fully, so that the target network learns from both the reweighted source deep representations and the target deep representations, further improving its generalization ability. Applied to image classification and time series prediction tasks, the proposed method achieves better transfer performance than comparable methods, demonstrating its practicality and generality.
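The core idea of minimizing a representational similarity discrepancy can be sketched as follows. This is a generic, minimal illustration (not the authors' exact loss): each batch of deep representations is summarized by its pairwise cosine-similarity matrix, and the discrepancy is the mean squared difference between the source and target matrices. All function names and shapes here are illustrative assumptions.

```python
import numpy as np

def similarity_matrix(feats):
    # Illustrative: row-normalize features, then take pairwise
    # cosine similarities within the batch (batch x batch matrix).
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

def similarity_discrepancy(source_feats, target_feats):
    # Mean squared difference between the two similarity structures;
    # driving this toward zero makes the target network mimic the
    # source network's similarity structure without source data labels.
    diff = similarity_matrix(source_feats) - similarity_matrix(target_feats)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16))  # hypothetical source deep representations
tgt = rng.normal(size=(8, 16))  # hypothetical target deep representations

loss = similarity_discrepancy(src, tgt)      # positive for mismatched structures
zero = similarity_discrepancy(src, src)      # identical structures give zero loss
```

In a training loop, `similarity_discrepancy` would serve as the transfer loss term, computed on the target network's features against the (fixed) source network's features for the same batch.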