Transferring knowledge from a single labeled source domain to multiple unlabeled target domains is a major challenge in unsupervised domain adaptation. Within multi-target domains, disordered discrepancy information makes it difficult to improve the model's classification performance across multiple new domains, which is the focus of this work. To this end, we propose a novel progressive target-into-source multi-target domain adaptation method with representation-decoupled learning, which we call MTDA-PDT. Specifically, we develop a progressive strategy based on the degree of divergence between each target domain and the source domain, and gradually augment the source domain with high-confidence target samples in an easy-to-hard manner. In addition, we adopt the domain shift and the matching degree with the source pre-trained model as similarity measurements for the target domains. In particular, we construct a cross-domain representation-decoupled adaptation that decouples the feature representations into three semantic components, and employ mutual information and mainstream sample gathering to guide domain alignment. Finally, extensive evaluations demonstrate the effectiveness and performance superiority of the proposed method.
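To make the progressive, easy-to-hard target-into-source idea concrete, the sketch below ranks target domains by a simple divergence proxy (distance between domain-mean features) and then folds high-confidence pseudo-labeled samples from each target into the source set in that order. All names here are illustrative assumptions: the nearest-centroid stand-in classifier, the 0.9 confidence threshold, and the mean-feature divergence are not the paper's actual components (which use the domain shift and the matching degree to the source pre-trained model).

```python
import numpy as np

def rank_targets_by_divergence(source_feats, target_domains):
    """Rank target domains from easiest (closest to source) to hardest.
    Divergence is approximated by the Euclidean distance between
    domain-mean features (a stand-in for the paper's measure)."""
    src_mean = source_feats.mean(axis=0)
    scores = {name: np.linalg.norm(feats.mean(axis=0) - src_mean)
              for name, feats in target_domains.items()}
    return sorted(scores, key=scores.get)

def nearest_centroid_proba(feats, labels, query):
    """Toy stand-in classifier (assumption, not the paper's model):
    softmax over negative distances to per-class centroids."""
    centroids = np.stack([feats[labels == c].mean(axis=0)
                          for c in np.unique(labels)])
    d = np.linalg.norm(query[:, None, :] - centroids[None, :, :], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def progressive_augment(source_feats, source_labels, target_domains,
                        classifier_proba, conf_thresh=0.9):
    """Easy-to-hard target-into-source augmentation: for each target
    domain in divergence order, pseudo-label its samples with the
    current (augmented) source set and merge only the samples whose
    confidence clears the threshold."""
    feats, labels = source_feats, source_labels
    for name in rank_targets_by_divergence(source_feats, target_domains):
        proba = classifier_proba(feats, labels, target_domains[name])
        conf, pseudo = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= conf_thresh
        feats = np.vstack([feats, target_domains[name][keep]])
        labels = np.concatenate([labels, pseudo[keep]])
    return feats, labels
```

Because easier (lower-divergence) domains are absorbed first, the pseudo-labels used for harder domains are produced by a source set that has already grown, which is the intuition behind the progressive strategy.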