Due to their excellent performance in aggregating global features, Transformer architectures have recently been widely adopted in deep learning-based visual object tracking. Nevertheless, existing Transformer-based trackers still struggle under occlusion, because occlusion causes a shift in feature distributions. To address this issue, we introduce domain adaptation techniques into a novel object tracking framework, DATransT, comprising a feature extraction backbone, a domain adaptive Transformer module, and a prediction head. The domain adaptive Transformer module consists of three weight-sharing branches with self- and cross-attention mechanisms: the source branch, the target branch, and the source-target branch. Specifically, the source-target branch employs cross-attention to effectively align the feature distributions of the source and target branches. Meanwhile, we present a pseudo-labeling strategy to generate high-quality training samples. Extensive experiments show that DATransT obtains promising results on several popular datasets, including LaSOT, TrackingNet, GOT-10k, NfS, OTB2015, and UAV123. Moreover, our method outperforms existing state-of-the-art trackers under both full and partial occlusions.
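To make the alignment idea concrete, the cross-attention used by the source-target branch can be sketched as standard scaled dot-product attention, with queries taken from one branch and keys/values from the other. This is a minimal NumPy illustration under that assumption; the function name, shapes, and random features are hypothetical, not the paper's implementation:

```python
import numpy as np

def cross_attention(q_feats, kv_feats, d_k):
    # Scaled dot-product cross-attention: queries come from one branch,
    # keys and values from the other, so the output re-expresses the
    # query features in terms of the other branch's distribution.
    scores = q_feats @ kv_feats.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over kv tokens
    return weights @ kv_feats

# Toy features: 4 source tokens and 6 target tokens, each of dimension 8.
rng = np.random.default_rng(0)
src = rng.standard_normal((4, 8))
tgt = rng.standard_normal((6, 8))
aligned = cross_attention(src, tgt, d_k=8)
print(aligned.shape)  # (4, 8): one aligned vector per source token
```

Each output row is a convex combination of target-branch features, which is one way such a branch can pull the two feature distributions toward each other.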