In resource-constrained federated edge learning (FEEL) systems, fragment-sharing is an efficient approach for multiple clients to cooperatively train a giant model with billions of parameters. Compared with classical federated learning schemes, in which each client fully trains and exchanges the entire local model, fragment-sharing only requires each client to choose a parameter fragment to train and share, according to its storage, computing, and networking capabilities. However, since the full model is no longer delivered under fragment-sharing, backdoor attacks hidden in the fragments become harder to detect, which poses a formidable security challenge for FEEL systems. In this paper, we first show that existing fragment-sharing schemes are highly vulnerable to backdoor attacks. We then propose a backdoor-resilient approach, named BR-FEEL, to defend against such attacks. Specifically, each benign client builds a twin model to integrate the parameter fragments received from other clients. A knowledge distillation process is then carried out on each client to transfer the clean knowledge from its twin model to its local model. With the twin model and the knowledge distillation process, BR-FEEL ensures that the local models of benign clients are not backdoored. Experiments are conducted on the CIFAR-10 and GTSRB datasets with MobileNetV2 and ResNet-34. The numerical results demonstrate the efficacy of BR-FEEL, reducing attack success rates by over 90% compared with other baselines under various attack methods.
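To make the distillation step concrete, the following is a minimal PyTorch-style sketch of how a benign client might transfer knowledge from its twin model (assembled from the received fragments) to its local model using its own clean data. The function name, the combination of soft and hard losses, and hyperparameters such as the temperature and the mixing weight are illustrative assumptions rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def distill_from_twin(local_model, twin_model, clean_loader, optimizer,
                      temperature=4.0, alpha=0.5, device="cpu"):
    """Sketch of client-side distillation: the twin model (integrating the
    received fragments) acts as the teacher, the local model as the student.
    Hyperparameter values here are illustrative, not taken from the paper."""
    twin_model.eval()
    local_model.train()
    for inputs, labels in clean_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        with torch.no_grad():
            twin_logits = twin_model(inputs)   # soft targets from the twin model
        local_logits = local_model(inputs)

        # Soft-label loss: KL divergence between temperature-scaled outputs
        kd_loss = F.kl_div(
            F.log_softmax(local_logits / temperature, dim=1),
            F.softmax(twin_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2

        # Hard-label loss on the client's own clean labels
        ce_loss = F.cross_entropy(local_logits, labels)

        loss = alpha * kd_loss + (1.0 - alpha) * ce_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because the distillation runs only on the client's own clean samples, the local model absorbs the useful knowledge carried by the fragments without directly copying potentially poisoned parameters.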