Vision transformers have recently been adapted for object tracking and have achieved promising performance owing to global correlation modeling via the self-attention mechanism. However, self-attention in existing trackers pays equal attention to the foreground and background, limiting discriminative ability because the attention is not target-aware. Existing solutions suffer from information loss and the introduction of additional noise. This study proposes a Transformer-based Siamese tracking architecture integrated with deformable attention, called TATrack. TATrack focuses on the most relevant information about the target in the search region by adaptively selecting the positions of the key and value pairs, thereby reducing information loss and additional noise. Experiments demonstrate that TATrack outperforms state-of-the-art models by a significant margin on GOT-10k, TrackingNet, LaSOT, and OTB100, with comparable processing speed. The source code and pretrained models are available at https://github.com/Kevoen/TATrack.
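The core idea above is that, instead of attending to every position in the search region, the query predicts offsets that select a small set of key/value positions. The following is a minimal, hypothetical NumPy sketch of that mechanism for a 1-D feature sequence; the function names, the single-head setup, and the offset projection `offset_w` are illustrative assumptions, not the paper's implementation (which operates on 2-D feature maps with bilinear sampling).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def deformable_attention(query, feat, ref, offset_w):
    """Single-head deformable attention over a 1-D feature sequence (sketch).

    query:    (d,) query vector
    feat:     (L, d) search-region features; keys and values share these
    ref:      scalar reference position in [0, L)
    offset_w: (d, n_points) hypothetical projection predicting sampling offsets
    """
    L, d = feat.shape
    # Offsets are predicted from the query, so sampling is input-dependent:
    # the model learns to place key/value positions on the target region.
    offsets = query @ offset_w                        # (n_points,)
    pos = np.clip(np.round(ref + offsets).astype(int), 0, L - 1)
    kv = feat[pos]                                    # (n_points, d) sampled keys/values
    # Attention is computed only over the sampled points, not the full sequence,
    # which is how background positions are excluded rather than down-weighted.
    attn = softmax(kv @ query / np.sqrt(d))           # (n_points,)
    return attn @ kv                                  # (d,) aggregated output
```

Because the softmax runs over only `n_points` sampled locations instead of all `L` positions, background features never enter the aggregation, which is the intended contrast with standard self-attention.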