In modern industrial production, unsupervised anomaly detection methods have gained significant attention because they address the scarcity of labeled anomaly samples. Among them, methods based on reverse distillation (RD) have become a mainstream choice owing to their excellent anomaly detection performance. However, RD models suffer from “feature leakage”, which can cause non-anomalous regions to be incorrectly identified as defects. To address this problem, we propose a Normal Feature-Enhanced Reverse teacher–student Distillation (NFERD) method. Specifically, we design a normal feature bank (NFB) module and incorporate it into the basic RD network. The NFB stores normal features extracted by the teacher model, helping the student model learn normal features more efficiently and thereby mitigating “feature leakage”. In addition, to effectively fuse the feature maps extracted by the student model with those stored in the NFBs, we design a Hybrid Attention Fusion Module (HAFM), which preserves key information during fusion by processing spatial and channel attention in parallel. Experiments on two publicly available datasets, MVTec and KSDD, show that our method outperforms existing mainstream methods in both image-level and pixel-level anomaly detection: it achieves an average I-AUROC of 99.32% on MVTec and a P-AUROC of 98.75% on KSDD, with clearer segmentation results, especially in complex scenarios. Furthermore, our method surpasses the second-best method by over 1.4% in PRO on MVTec, demonstrating its effectiveness.
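To make the fusion idea concrete, the following is a minimal NumPy sketch of parallel spatial- and channel-attention fusion in the spirit of the HAFM. The abstract does not specify the module's internals, so the merge rule, the sigmoid gating, and the function names (`channel_attention`, `spatial_attention`, `hybrid_attention_fuse`) are all illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f):
    # f: (C, H, W); gate each channel by its global-average-pooled response.
    w = sigmoid(f.mean(axis=(1, 2)))        # (C,) channel gates
    return f * w[:, None, None]

def spatial_attention(f):
    # Gate each spatial location by its cross-channel mean response.
    w = sigmoid(f.mean(axis=0))             # (H, W) spatial gates
    return f * w[None, :, :]

def hybrid_attention_fuse(student_feat, bank_feat):
    # Hypothetical fusion: merge the student feature map with the
    # normal-feature-bank map, then run the channel and spatial
    # attention branches in parallel and average their outputs.
    fused = 0.5 * (student_feat + bank_feat)
    return 0.5 * (channel_attention(fused) + spatial_attention(fused))
```

In this sketch, keeping the two attention branches parallel (rather than sequential) means neither gating step can fully suppress information before the other branch sees it, which matches the stated goal of preserving key information during fusion.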