Abstract

Extreme learning machines (ELMs), as "generalized" single-hidden-layer feedforward networks, have proven effective and efficient for classification and regression problems. Traditional ELMs assume that the training and testing data are drawn from the same distribution, an assumption that is often violated in real-world applications. In this paper, we propose a unified cross-domain ELM (CDELM) framework to address domain adaptation problems, in which the distributions of the training data (source domain) and testing data (target domain) are different but related. CDELM not only fully leverages labeled source data and unlabeled target data simultaneously to construct an adaptive target classifier, but also maintains the computational efficiency of ELMs. Specifically, CDELM adapts the source classifier to the target domain by matching the projected means of both domains, and exploits the structural properties of the target domain through manifold regularization to make the final classifier more adaptable to target data. Based on this framework, two algorithms, CDELM-M and CDELM-C, are proposed, which minimize the marginal and the conditional distribution distance between the source and target domains, respectively. Moreover, CDELM-C can further enhance classification accuracy through multiple iterations. Comprehensive experimental studies on artificial datasets and public text and image datasets demonstrate that both CDELM-M and CDELM-C are competitive with several state-of-the-art domain adaptation methods in terms of classification accuracy and efficiency.
