Abstract

In unsupervised domain adaptation with the Extreme Learning Machine (ELM), the output-layer parameters must serve both classification and domain adaptation, and these two roles often cannot be fully exploited at the same time. In addition, traditional matching methods based on data probability distributions fail to find a common subspace of the source and target domains when the difference between the domains is large. To relieve the classifier parameters of this double burden, the ELM learning process is divided into two stages, feature representation and adaptive classifier learning, and a joint feature representation and classifier learning based unsupervised domain adaptation ELM model is proposed. In the feature representation stage, the source and target domain data are projected into their respective subspaces while the difference in probability distribution between the two domains is minimized. In the adaptive classifier learning stage, a smooth manifold regularization term on the target domain is used to improve the adaptability of the parameters. Experiments on six different types of datasets show that the proposed model achieves higher cross-domain classification accuracy.
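As a rough illustration of the two stages described above (the notation and the exact form of the objectives are our assumptions, not taken from the paper), the learning problem can be sketched as follows. Here X_s and X_t denote the source and target data, P_s and P_t the learned domain-specific projections, H_s and H_t the ELM hidden-layer outputs of the projected data, Y_s the source labels, L_t a graph Laplacian built on the target domain, and mu, lambda, gamma trade-off parameters.

% Stage 1 (feature representation, sketch): learn projections P_s, P_t that
% preserve each domain's structure while reducing the distribution gap,
% measured here by a maximum mean discrepancy (MMD) term.
\min_{P_s,\,P_t}\;
  \Big\| \tfrac{1}{n_s}\sum_{i=1}^{n_s} P_s^{\top} x_i^{s}
       - \tfrac{1}{n_t}\sum_{j=1}^{n_t} P_t^{\top} x_j^{t} \Big\|^{2}
  + \mu \Big( \|X_s - P_s P_s^{\top} X_s\|_F^{2}
            + \|X_t - P_t P_t^{\top} X_t\|_F^{2} \Big)

% Stage 2 (adaptive classifier learning, sketch): a regularized ELM fit on the
% projected source data plus a manifold-smoothness penalty on the projected
% target data, which encourages nearby target samples to receive similar outputs.
\min_{\beta}\;
  \|H_s \beta - Y_s\|_F^{2}
  + \lambda \|\beta\|_F^{2}
  + \gamma \,\operatorname{tr}\!\big( \beta^{\top} H_t^{\top} L_t H_t \beta \big)

The second problem admits a closed-form solution in beta once the projections and the target Laplacian are fixed, which is consistent with the standard ELM practice of solving the output weights by regularized least squares; the specific closed form used by the authors is not given in the abstract.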
