Homography estimation is one of the key challenges in image alignment, where the goal is to estimate the projective transformation between two images of the same plane. Unsupervised learning methods have become increasingly popular because they perform well and require no labeled data. However, in scenes with repetitive textures, correspondences between local features can be ambiguous, degrading homography estimation accuracy. This paper proposes RTHEN, a new unsupervised deep homography estimation method, to address this problem. To effectively extract repetitive texture features, we design a multi-scale feature pyramid Siamese network (FPSN). Specifically, we dynamically allocate the weights of repetitive texture features through a dynamic attention module and introduce a channel attention module that provides rich contextual information for repetitive texture regions. We further propose a hard triplet loss function based on overlap constraints to optimize the matching results. In addition, we collected a repetitive texture image dataset (RTID) for training and evaluating homography estimation. Experimental results show that our method outperforms existing learning-based methods in repetitive texture scenes and achieves performance competitive with state-of-the-art traditional methods.
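As a rough illustration of the overlap-constrained hard triplet idea mentioned above, the sketch below shows one generic PyTorch formulation. It is not the paper's exact loss: the function name, tensor shapes, margin value, cosine-normalized descriptors, and in-batch hard negative mining are all assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def hard_triplet_loss(anchor_feats, positive_feats, overlap_mask, margin=1.0):
    """Illustrative overlap-constrained hard triplet loss (hypothetical sketch).

    anchor_feats, positive_feats: (N, C) descriptors sampled at corresponding
        locations of the two feature maps after warping.
    overlap_mask: (N,) bool tensor, True where the sample lies inside the
        overlapping region of the two images (valid correspondences).
    """
    # Keep only samples inside the overlap region, so non-overlapping areas
    # cannot produce spurious positive or negative pairs.
    a = F.normalize(anchor_feats[overlap_mask], dim=1)
    p = F.normalize(positive_feats[overlap_mask], dim=1)

    # Pairwise distances: diagonal entries are the true (positive) matches,
    # off-diagonal entries are candidate negatives.
    dist = torch.cdist(a, p)                      # (M, M)
    pos = dist.diagonal()                         # distance to true match

    # Hard negative mining: the closest non-matching descriptor per anchor.
    eye = torch.eye(dist.size(0), dtype=torch.bool, device=dist.device)
    hard_neg = dist.masked_fill(eye, float("inf")).min(dim=1).values

    # Standard triplet margin objective on the mined hard negatives.
    return F.relu(pos - hard_neg + margin).mean()
```

In a setup like this, restricting sampling to the overlap mask is what keeps repetitive-texture patches outside the shared view from being mined as hard negatives; the specific mining strategy and distance metric used by RTHEN may differ.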