Abstract

Domain adaptation is an effective approach for improving the generalization ability of deep learning methods, making deep models more stable and robust. However, such methods often run into deployment problems when the models are deployed on different types of edge devices. In this work, we propose a new channel pruning method called Domain Adaptive Channel Pruning (DACP), which is specifically designed for the unsupervised domain adaptation task, where there is considerable data distribution mismatch between the source and target domains. We prune the channels and adjust the weights in a layer-by-layer fashion. In contrast to existing layer-by-layer channel pruning approaches, which only consider how to reconstruct the features of the next layer, our approach aims to minimize both the classification error and the domain distribution mismatch. Furthermore, we propose a simple but effective approach to utilize the unlabeled data in the target domain. Our comprehensive experiments on two benchmark datasets demonstrate that our newly proposed DACP method outperforms existing channel pruning approaches under the unsupervised domain adaptation setting.
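
The abstract states that each layer is pruned under a joint criterion combining classification error and domain distribution mismatch. The sketch below is only an illustration of what such a per-layer selection objective could look like; the linear-MMD discrepancy term, the greedy channel-selection loop, the `lam` trade-off weight, and all function names are assumptions for exposition, not details taken from the paper or its released code.

```python
# Illustrative sketch (assumed, not the authors' implementation) of a
# per-layer channel-selection objective that combines source-domain
# classification loss with a source/target domain-discrepancy term.
import torch
import torch.nn as nn
import torch.nn.functional as F


def linear_mmd(src_feat: torch.Tensor, tgt_feat: torch.Tensor) -> torch.Tensor:
    """Linear-kernel MMD: squared distance between mean source/target features."""
    return (src_feat.mean(dim=0) - tgt_feat.mean(dim=0)).pow(2).sum()


def pruning_objective(layer, classifier, mask, src_x, src_y, tgt_x, lam=1.0):
    """Joint objective for one candidate channel mask (flattened features assumed):
    source classification loss + lam * domain mismatch on the masked features."""
    src_feat = layer(src_x) * mask   # zero out pruned channels
    tgt_feat = layer(tgt_x) * mask
    cls_loss = F.cross_entropy(classifier(src_feat), src_y)
    return cls_loss + lam * linear_mmd(src_feat, tgt_feat)


def greedy_channel_selection(layer, classifier, src_x, src_y, tgt_x, keep, lam=1.0):
    """Greedily pick `keep` channels that minimize the joint objective."""
    num_ch = layer(src_x).shape[1]
    selected, remaining = [], list(range(num_ch))
    with torch.no_grad():
        for _ in range(keep):
            best_ch, best_val = None, float("inf")
            for ch in remaining:
                mask = torch.zeros(num_ch)
                mask[selected + [ch]] = 1.0
                val = pruning_objective(layer, classifier, mask,
                                        src_x, src_y, tgt_x, lam).item()
                if val < best_val:
                    best_ch, best_val = ch, val
            selected.append(best_ch)
            remaining.remove(best_ch)
    return selected


if __name__ == "__main__":
    # Toy usage with random data: 64-dim inputs, 32 channels, 10 classes.
    layer = nn.Linear(64, 32)
    classifier = nn.Linear(32, 10)
    src_x, src_y = torch.randn(128, 64), torch.randint(0, 10, (128,))
    tgt_x = torch.randn(128, 64)          # unlabeled target data
    kept = greedy_channel_selection(layer, classifier, src_x, src_y, tgt_x, keep=16)
    print("kept channels:", sorted(kept))
```

After selecting channels, a layer-by-layer pruning scheme would typically also adjust the remaining weights (e.g., by a least-squares fit to the original outputs) before moving to the next layer; that adjustment step is omitted here.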
