Abstract

Linear discriminant analysis (LDA) has been widely used as a feature extraction technique. However, LDA may fail when the data come from different domains, for two reasons: 1) the distribution discrepancy of the data may disturb the linear transformation matrix so that it cannot extract the most discriminative features, and 2) the original design of LDA does not consider unlabeled data, so unlabeled data cannot take part in the training process to further improve LDA's performance. To address these problems, in this brief, we propose a novel transferable LDA (TLDA) method that extends LDA to the scenario in which the data have different probability distributions. The whole learning process of TLDA is driven by the philosophy that data from the same subspace have a low-rank structure. The matrix rank in TLDA is the key learning criterion used to conduct local and global linear transformations that restore the low-rank structure of data drawn from different distributions and enlarge the distances among different subspaces. In doing so, the variation caused by distribution discrepancy within the same subspace can be reduced, i.e., the data can be aligned well, and a maximally separated structure can be achieved for data from different subspaces. A simple projected subgradient-based method is proposed to optimize the objective of TLDA, and a rigorous theoretical proof is provided to guarantee fast convergence. Experimental evaluation on public data sets demonstrates that TLDA achieves better classification performance and outperforms state-of-the-art methods.
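To make the projected subgradient scheme concrete, the sketch below applies it to a generic rank-surrogate objective. Since matrix rank is non-convex, the nuclear norm is used here as its standard convex relaxation, with a diminishing step size as is typical for subgradient methods. This is only an illustrative sketch under those assumptions, not the paper's actual TLDA objective, constraints, or update rules; all function names are hypothetical.

```python
import numpy as np

def nuclear_norm_subgradient(M):
    # A subgradient of the nuclear norm ||M||_* is U @ Vt,
    # where M = U @ diag(s) @ Vt is the thin SVD of M.
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def project_orthonormal(P):
    # Project onto the set of matrices with orthonormal columns
    # (P^T P = I) via the polar factor of the SVD.
    U, _, Vt = np.linalg.svd(P, full_matrices=False)
    return U @ Vt

def projected_subgradient(X, d, steps=200, lr=1e-2, seed=0):
    # Hypothetical illustration: minimize the nuclear norm (a convex
    # surrogate of rank) of the projected data P^T X over projection
    # matrices P with orthonormal columns.
    rng = np.random.default_rng(seed)
    P = project_orthonormal(rng.standard_normal((X.shape[0], d)))
    for t in range(steps):
        G = nuclear_norm_subgradient(P.T @ X)       # subgradient w.r.t. P^T X
        grad_P = X @ G.T                            # chain rule back to P
        step = lr / np.sqrt(t + 1)                  # diminishing step size
        P = project_orthonormal(P - step * grad_P)  # step, then project back
    return P

# Usage on synthetic data: 50-dimensional samples, projected to 5 dimensions.
X = np.random.default_rng(1).standard_normal((50, 200))
P = projected_subgradient(X, d=5)
print(P.shape, np.allclose(P.T @ P, np.eye(5)))
```

The 1/sqrt(t) step-size schedule is the standard choice that yields convergence guarantees for projected subgradient methods, in the spirit of the convergence analysis the abstract refers to.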
