Recent years have witnessed the success of Few-shot Learning (FSL) methods in equipment reliability enhancement and fault diagnosis, owing to their ability to learn from limited data and adapt to new operating conditions. However, label noise is inevitably introduced into datasets through sensor bias, manual collection, and mislabeling, which degrades the quality of the supervised information in the few-shot dataset and poses significant challenges for accurate fault diagnosis. In this paper, the problem of Few-shot Fault Diagnosis with Noisy Labels (FFDNL) is studied for the first time, and a novel method named Enhanced Transformer with Asymmetric Loss Function (ETALF) is proposed. ETALF leverages the self-attention mechanism of the transformer to dynamically measure the similarity between fault samples in the support set, which improves robustness against label noise and naturally aggregates similar samples into the corresponding correct prototypes. Furthermore, an asymmetric loss function is designed that adaptively assigns larger penalties to incorrect category predictions and smaller penalties to correct ones, thereby enhancing diagnostic performance through this inherent asymmetry. Comprehensive experiments on two benchmark datasets and comparisons with representative approaches validate the effectiveness of the proposed ETALF for intelligent fault diagnosis with limited, noisily labeled data under varying working conditions: at a label-noise rate of 0.2 during meta-training and meta-testing, ETALF achieves accuracies of 97.77% and 95.78% on the CWRU and KAIST datasets, respectively.
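To make the two ideas in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: it computes class prototypes weighted by attention-style pairwise similarity (so a mislabeled support sample that is dissimilar to its classmates contributes less), and an asymmetric focal-style loss in which confident wrong-class predictions are penalized more heavily than correct ones. All function names and the hyperparameters `tau`, `gamma_pos`, and `gamma_neg` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_prototypes(support, labels, n_classes, tau=1.0):
    """Aggregate support embeddings into class prototypes using
    similarity-based attention weights: each sample is weighted by its
    average affinity to the other samples of its (possibly noisy) class,
    so outliers caused by label noise are down-weighted."""
    protos = np.zeros((n_classes, support.shape[1]))
    for c in range(n_classes):
        members = support[labels == c]          # (k, d) embeddings
        sim = members @ members.T / tau         # pairwise similarity
        w = softmax(sim.mean(axis=1))           # affinity-based weights
        protos[c] = w @ members                 # weighted prototype
    return protos

def asymmetric_loss(logits, targets, gamma_pos=0.5, gamma_neg=2.0):
    """One common asymmetric (focal-style) formulation: the correct-class
    term is lightly modulated by gamma_pos, while confident predictions
    on wrong classes are amplified by gamma_neg > gamma_pos."""
    p = softmax(logits)
    onehot = np.eye(logits.shape[1])[targets]
    pos = onehot * (1 - p) ** gamma_pos * np.log(np.clip(p, 1e-12, 1.0))
    neg = (1 - onehot) * p ** gamma_neg * np.log(np.clip(1 - p, 1e-12, 1.0))
    return -(pos + neg).sum() / logits.shape[0]
```

As a sanity check of the asymmetry, a confidently wrong logit vector incurs a much larger loss than a confidently correct one, which is the behavior the abstract attributes to the designed loss.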