Source-free unsupervised domain adaptation aims to adapt a source model to an unlabeled target domain without access to the source data, typically for privacy reasons. Existing works mainly address this problem through self-training and representation learning. However, they typically learn representations at a single semantic level and rarely exploit the rich hierarchical semantic information needed to obtain clear decision boundaries, which makes it hard for these methods to achieve satisfactory generalization performance. In this paper, we propose a novel hierarchical contrastive domain adaptation algorithm that applies self-supervised contrastive learning to both fine-grained instance semantics and coarse-grained cluster semantics. On the one hand, we propose an adaptive prototype pseudo-labeling strategy that yields substantially more reliable pseudo-labels. On the other hand, we propose hierarchical contrastive representation learning at both the fine-grained instance level and the coarse-grained cluster level, which reduces the negative effect of label noise and stabilizes the whole training procedure. Extensive experiments on standard unsupervised domain adaptation benchmarks demonstrate the effectiveness of the proposed method.
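The prototype pseudo-labeling idea mentioned above can be sketched as follows. This is a minimal illustration of the common prototype-refinement scheme used in source-free adaptation (probability-weighted class centroids followed by nearest-prototype reassignment), not the paper's exact algorithm; all names and the number of refinement rounds are illustrative assumptions.

```python
import numpy as np

def prototype_pseudo_labels(feats, probs, rounds=2):
    """Assign pseudo-labels by iterative nearest-prototype refinement.

    feats: (N, D) target features; probs: (N, C) source-model softmax outputs.
    Round 1 builds soft, probability-weighted class prototypes; later rounds
    harden the assignments to one-hot weights and recompute the prototypes.
    """
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    w = probs  # soft weights for the first round
    labels = probs.argmax(1)
    for _ in range(rounds):
        # Weighted class centroids, then L2-normalize for cosine similarity.
        protos = w.T @ f / (w.sum(0)[:, None] + 1e-8)
        protos /= np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8
        labels = (f @ protos.T).argmax(1)       # nearest-prototype assignment
        w = np.eye(probs.shape[1])[labels]      # one-hot weights for next round
    return labels

# Two well-separated clusters with only weakly informative predictions:
feats = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
probs = np.array([[0.6, 0.4], [0.55, 0.45], [0.4, 0.6], [0.45, 0.55]])
print(prototype_pseudo_labels(feats, probs).tolist())  # [0, 0, 1, 1]
```

Because the refined labels come from cluster structure in feature space rather than from the (possibly miscalibrated) classifier head alone, they are more robust to domain shift, which is what makes them usable as supervision for the contrastive objectives.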