To mitigate the distribution shift between the source and target domains, many unsupervised domain adaptation methods achieve class-level alignment by aligning the prototypes of the two domains. Since target-domain labels are unobserved, the target prototypes are constructed from pseudo-labels. However, inaccurate pseudo-labels can yield biased prototypes, which introduce noise into the distribution alignment. Moreover, existing methods extract domain-invariant features with either a shared feature extractor or two separate extractors: the former is limited by large domain gaps, while the latter increases the number of network parameters. To this end, we propose a Softmax-Based Prototype construction and Adaptation (SBPA) method, which constructs prototypes from the softmax output of the classifier instead of ground-truth labels or pseudo-labels. SBPA performs domain-level alignment through adversarial training and class-level alignment by aligning prototypes of the same class. In addition, SBPA contains a residual block that explicitly models the difference between the source and target features extracted by a shared feature extractor. We evaluate our method on four widely used datasets; the results show that it outperforms recent domain adaptation methods, especially on DomainNet, the most challenging domain adaptation benchmark to date.
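As a rough illustration of the core idea (not the authors' implementation; all function names here are hypothetical), a softmax-based prototype can be computed as a probability-weighted mean of features, so that every sample contributes to every class prototype in proportion to its predicted class probability rather than through a hard, possibly wrong, pseudo-label:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def softmax_prototypes(features, logits):
    """Class prototypes as softmax-probability-weighted feature means.

    features: (n, d) array of extracted features
    logits:   (n, num_classes) classifier outputs for the same samples
    returns:  (num_classes, d) array of prototypes
    """
    p = softmax(logits)                       # (n, num_classes) soft assignments
    weighted = p.T @ features                 # (num_classes, d) weighted sums
    return weighted / p.sum(axis=0)[:, None]  # normalize by total class weight

# Toy usage: 4 samples with 3-dim features and 2 classes.
feats = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.9, 0.1]])
logits = np.array([[4.0, 0.0],
                   [3.0, 0.0],
                   [0.0, 4.0],
                   [0.0, 3.0]])
protos = softmax_prototypes(feats, logits)  # one d-dim prototype per class
```

Because the weights are continuous probabilities, a sample the classifier is unsure about influences the biased-prototype problem less than a confidently wrong pseudo-label would.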