Target representation is crucial for visual tracking. Most Siamese-based trackers strive to build target models with various deep networks, yet they neglect the correlations among features and therefore fail to learn more representative features. In this paper, we propose a spatial attention inference model for cascaded Siamese tracking with a dynamic residual update strategy. First, a spatial attention inference model is constructed. The model fuses interlayer multi-scale features generated by dilated convolution to enhance the spatial representation ability of features. On this basis, we use self-attention to capture the interaction between the target and its context, and cross-attention to aggregate the interdependencies between the target and the background. By exploiting the correlations among features, the model infers latent feature information to build better appearance models. Second, a cascaded localization-aware network is introduced to bridge the gap between classification and regression. We propose an alignment-aware branch that resamples and learns object-aware features from the predicted bounding boxes to obtain localization confidence, which is then used to correct the classification confidence through weighted integration. This cascaded strategy alleviates the misalignment between classification and regression. Finally, a dynamic residual update strategy is proposed. It employs the Context Fusion Network (CFNet) to fuse the templates of historical and current frames into an optimal template, and a dynamic threshold function decides when to update by evaluating the tracking results. By exploiting temporal context, the strategy fully explores the intrinsic properties of the target and enhances adaptability to changes in the target's appearance.
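To make the confidence correction and update decision concrete, the following is a minimal sketch. The weight `w`, the threshold schedule in `should_update`, and both function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuse_confidence(cls_map, loc_map, w=0.6):
    """Weighted integration of classification and localization confidence.

    cls_map, loc_map: score maps of identical shape.
    w: hypothetical weight balancing the two confidences (assumption).
    """
    return (1.0 - w) * cls_map + w * loc_map

def should_update(fused_score, frame_idx, tau0=0.5, gamma=0.01):
    """Hypothetical dynamic threshold: update the template only when the
    fused tracking score exceeds a threshold that relaxes over time.
    The exponential decay schedule here is an assumed placeholder."""
    tau = tau0 * np.exp(-gamma * frame_idx)
    return bool(fused_score > tau)

# Example: correct classification scores with localization confidence,
# then decide whether the current frame is reliable enough for an update.
cls = np.array([0.9, 0.2])
loc = np.array([0.3, 0.8])
fused = fuse_confidence(cls, loc, w=0.5)   # -> [0.6, 0.5]
update = should_update(fused.max(), frame_idx=0)  # 0.6 > 0.5 -> True
```

In a full tracker, the maximum of the fused map would select the final bounding box, and only frames passing the threshold test would contribute a new template to the fusion network.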
We conducted extensive experiments on seven tracking benchmarks, including OTB100, UAV123, TC128, VOT2016, VOT2018, GOT10k and LaSOT, to validate the effectiveness of our proposed algorithm.