Abstract

Recent studies have revealed that deep networks can learn transferable features that generalize well to novel tasks with little or no labeled data for domain adaptation. However, within deep architectures it remains unclear which components of the feature representations allow the original joint distributions to be matched using the Joint Maximum Mean Discrepancy (JMMD). We present a new backpropagation algorithm for JMMD, called Balanced Joint Maximum Mean Discrepancy (B-JMMD), to further reduce the domain discrepancy. B-JMMD achieves balanced distribution adaptation in deep network architectures and can be regarded as an improved version of JMMD's backpropagation algorithm. The proposed method adaptively weights the importance of the marginal and conditional distributions behind multiple domain-specific layers across domains to closely match the joint distributions in a second-order reproducing kernel Hilbert space. Learning is performed by a special form of stochastic gradient descent, in which the gradient is computed by backpropagation with a balanced distribution adaptation strategy. Theoretical analysis shows that the proposed B-JMMD is superior to the JMMD method. Experiments confirm that our method yields state-of-the-art results on standard datasets.
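
To make the balanced adaptation idea concrete, the sketch below shows one plausible form of such a loss: a weighted combination of a marginal MMD term and a class-conditional MMD term, with a balance factor trading off the two. The Gaussian-kernel estimator, the factor `mu`, the use of target pseudo-labels, and all function names here are illustrative assumptions, not the paper's exact B-JMMD formulation, which additionally matches joint distributions across multiple domain-specific layers in a second-order RKHS.

```python
# Minimal sketch of a balanced marginal/conditional MMD loss (PyTorch).
# All names and the fixed balance factor `mu` are assumptions for
# illustration; B-JMMD estimates the balance adaptively.
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y
    under a Gaussian kernel."""
    def kernel(a, b):
        d = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def balanced_mmd(src_feats, src_labels, tgt_feats, tgt_pseudo_labels,
                 mu=0.5, num_classes=10):
    """Weighted sum of a marginal MMD term and class-conditional MMD
    terms. mu in [0, 1] trades off marginal vs. conditional alignment;
    target labels are pseudo-labels predicted by the classifier."""
    marginal = gaussian_mmd(src_feats, tgt_feats)
    conditional = 0.0
    for c in range(num_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo_labels == c]
        if len(s) > 1 and len(t) > 1:       # skip classes absent in a batch
            conditional = conditional + gaussian_mmd(s, t)
    return (1 - mu) * marginal + mu * conditional / num_classes
```

In a full training pipeline this adaptation loss would be added to the source-domain classification loss and minimized by stochastic gradient descent, with the balance between the marginal and conditional terms estimated from the data rather than fixed.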
