Domain adaptation (DA) aims to reduce the knowledge gap between domains and improve a model's predictive ability in the target domain. However, the representations learned by the feature extraction network often contain redundant information, which is harmful to domain alignment. In addition, many methods focus only on either the single-source or the multi-source DA task, which limits their real-world applicability. In this paper, we propose a simple but effective method called Category and Domain Features Augmentation (CDFA), which consists of two components: a Contrastive Classifier Network (CCN) and a Domain-specific Learning Network (DSLN). CDFA removes domain-specific representations at the feature extraction stage to alleviate transfer difficulty: CCN increases the probabilistic outputs of samples to avoid misclassification, while DSLN facilitates the separation of redundant information from the full representations by learning domain-specific representations. Empirical evaluations on several cross-domain benchmarks under both single-source and multi-source DA scenarios demonstrate the competitive performance of CDFA with respect to the state of the art.