Heterogeneous domain adaptation is a challenging problem in transfer learning because samples from the source and target domains reside in different feature spaces with different feature dimensions. The key problem is how to reduce the gaps (e.g., data distribution mismatch) between the two heterogeneous domains and produce highly discriminative representations for the target domain. In this paper, we address these challenges with the proposed incremental discriminative knowledge consistency (IDKC) method, which integrates cross-domain mapping, distribution matching, discriminative knowledge preservation, and domain-specific geometric structure consistency into a unified learning model. Specifically, we learn domain-specific projections that map the original samples into a common subspace in which the marginal distributions are well aligned and discriminative knowledge consistency is preserved by leveraging the labeled samples from both domains. Moreover, domain-specific structure consistency is enforced to preserve the data manifold from the original space to the common feature space in each domain. Furthermore, we assign pseudo labels to unlabeled target samples based on feature correlation and retain only the pseudo labels with high correlation coefficients for the next learning iteration. This pseudo-labeling strategy increases the number of labeled target samples in each category and thus enforces class-discriminative knowledge consistency, producing more discriminative feature representations for the target domain. Extensive experiments on several standard benchmarks for object recognition, cross-language text classification, and digit classification verify the effectiveness of our method.
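The correlation-based pseudo-labeling step described above can be illustrated with a minimal sketch. This is not the paper's implementation; the choice of class-mean prototypes, Pearson correlation as the feature-correlation measure, and the threshold value are all assumptions made for illustration.

```python
import numpy as np

def pseudo_label(Z_unlabeled, Z_labeled, y_labeled, threshold=0.8):
    """Assign pseudo labels to projected unlabeled target samples by
    feature correlation with per-class prototypes in the common subspace,
    keeping only samples whose best correlation exceeds `threshold`.
    (Illustrative sketch; details are assumptions, not the IDKC method.)"""
    classes = np.unique(y_labeled)
    # Class prototypes: mean projected feature vector per class.
    prototypes = np.stack([Z_labeled[y_labeled == c].mean(axis=0)
                           for c in classes])

    # Pearson correlation between each unlabeled sample and each prototype.
    Zc = Z_unlabeled - Z_unlabeled.mean(axis=1, keepdims=True)
    Pc = prototypes - prototypes.mean(axis=1, keepdims=True)
    corr = (Zc @ Pc.T) / (np.linalg.norm(Zc, axis=1, keepdims=True)
                          * np.linalg.norm(Pc, axis=1) + 1e-12)

    best = corr.argmax(axis=1)                      # most correlated class
    conf = corr[np.arange(len(Z_unlabeled)), best]  # its correlation value
    keep = conf >= threshold                        # retain confident ones
    return np.where(keep)[0], classes[best[keep]]
```

In an iterative scheme, the retained samples and their pseudo labels would be merged into the labeled target set before the projections are re-learned in the next round.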