Abstract

The domain-invariant constraint is usually imposed on a single layer of a convolutional neural network (CNN). This scheme may help the constrained layer learn domain-invariant features, but at the cost of sacrificing the other layers' learning ability. In this paper, we propose a novel method called multi-layer adversarial domain adaptation (MLADA), which incorporates information from all layers through a hierarchical scheme. For the convolutional layers of the CNN model, a feature-level domain classifier is introduced to learn domain-invariant representations. For the fully connected layer of the CNN model, a prediction-level domain classifier is set up to reduce domain discrepancy in the decision layer. A union domain classifier is then mounted to balance the joint distribution constraints between the feature-level and prediction-level domain classifiers. Thus, MLADA obtains domain-invariant representations through sufficient training across all layers of the model. Experimental results indicate that MLADA outperforms other methods on multiple classic domain adaptation tasks. The ablation study confirms the necessity of all three domain classifiers in the MLADA architecture.
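The hierarchical scheme described above can be illustrated with a minimal sketch. This is framework-agnostic illustrative code, not the paper's exact formulation: the three classifier functions, their inputs, and the weighting factors `lam` are assumptions standing in for trainable networks, and the adversarial gradient-reversal training loop is omitted.

```python
import math

def sigmoid(x):
    """Logistic function mapping a score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def bce(p, y):
    """Binary cross-entropy of domain prediction p against label y (0 = source, 1 = target)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Hypothetical domain classifiers (stand-ins for trainable networks);
# each maps its input to P(domain = target).
def feature_domain_classifier(features):
    # Operates on convolutional-layer features.
    return sigmoid(sum(features))

def prediction_domain_classifier(preds):
    # Operates on the fully connected layer's class predictions.
    return sigmoid(sum(preds))

def union_domain_classifier(features, preds):
    # Operates on the joint (feature, prediction) input to balance
    # the constraints of the two classifiers above.
    return sigmoid(sum(features) + sum(preds))

def mlada_domain_loss(features, preds, domain_label, lam=(1.0, 1.0, 1.0)):
    """Combined domain loss over the three classifiers; lam are illustrative weights."""
    l_feat = bce(feature_domain_classifier(features), domain_label)
    l_pred = bce(prediction_domain_classifier(preds), domain_label)
    l_union = bce(union_domain_classifier(features, preds), domain_label)
    return lam[0] * l_feat + lam[1] * l_pred + lam[2] * l_union

# Example: domain loss for one source-domain sample (domain label 0).
loss = mlada_domain_loss([0.2, -0.1, 0.4], [0.7, 0.3], domain_label=0)
```

In actual adversarial training, the feature extractor would be updated to *maximize* these domain losses (e.g. via a gradient reversal layer) while the domain classifiers minimize them, driving the representations toward domain invariance at every level.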
