Deep neural networks have advanced rapidly in recent years and are now widely deployed in many critical real-world applications. However, recent research has shown that deep neural networks are vulnerable to backdoor attacks: attackers release backdoored models that perform well on benign samples while behaving abnormally on inputs containing predefined triggers. A successful backdoor attack can have serious consequences; for example, an attacker could use a backdoored model to bypass a critical face-recognition authentication system. In this paper, we propose PBADT, a precise backdoor attack with a dynamic trigger. Unlike existing work that uses static or random trigger masks, we design an interpretable trigger-mask generation framework that places triggers at the positions with the greatest influence on the prediction result. In addition, we exploit forgettable events to improve the efficiency of backdoor injection. We evaluate the proposed method extensively on three face-recognition datasets (LFW, CelebA, and VGGFace) and two general image datasets (CIFAR-10 and GTSRB). Our approach achieves near-perfect attack performance on backdoored data.
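The saliency-guided trigger-placement idea can be illustrated with a minimal sketch. Here a per-pixel saliency score stands in for the influence of each input position on the prediction; the linear gradient proxy, the function names `saliency_mask` and `apply_trigger`, the top-k selection, and the constant trigger value are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def saliency_mask(weights, image, k):
    """Select the k pixel positions with the largest saliency.

    For a toy linear scorer f(x) = w . x, the input gradient is just w,
    so |w| serves as a stand-in for a network's per-pixel saliency
    (hypothetical proxy for the influence on the prediction).
    """
    saliency = np.abs(weights)
    top_k = np.argsort(saliency, axis=None)[-k:]   # indices of top-k pixels
    mask = np.zeros_like(image, dtype=bool)
    mask.flat[top_k] = True
    return mask

def apply_trigger(image, mask, trigger_value=1.0):
    """Stamp a constant-valued trigger onto the masked positions only."""
    poisoned = image.copy()
    poisoned[mask] = trigger_value
    return poisoned

rng = np.random.default_rng(0)
img = rng.random((8, 8))               # toy 8x8 grayscale image
w = rng.standard_normal((8, 8))        # toy model weights
mask = saliency_mask(w, img, k=6)
poisoned = apply_trigger(img, mask)
print(mask.sum())  # 6 pixels carry the trigger
```

A real attack would derive the saliency from the target network's input gradients and optimize the trigger pattern itself; the sketch only shows how a mask restricted to high-influence positions differs from a static or random mask.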