Federated learning (FL) has emerged as a powerful technology widely applied in the Internet of Things (IoT). Recently, researchers have shown increased interest in privacy-preserving FL with <italic>unreliable users</italic>. The goal of such work is to achieve private training under ciphertext while ensuring that the FL model is derived mainly from the contributions of users with high-quality data. However, existing work is still in its infancy, and the main challenge researchers face is achieving schemes that meet the demands of both high accuracy and efficiency. To address this, we propose an efficient privacy-preserving FL (EPPFL) scheme with <italic>unreliable users</italic>. Specifically, we design a novel scheme to mitigate the negative impact of <italic>unreliable users</italic>, guaranteeing that the target model is updated with high-quality data. By iteratively executing our "Excluding Irrelevant Components" and "Weighted Aggregation" procedures, the FL model converges rapidly while incurring limited communication and computation overhead. As a result, not only is the model accuracy optimized, but the training efficiency is also improved. Meanwhile, we construct a secure framework based on the threshold Paillier cryptosystem, which rigorously protects all user-related private information during the training process. Furthermore, extensive experiments demonstrate that EPPFL achieves high-level performance in terms of accuracy and efficiency.
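To fix ideas, the general notion of weighted aggregation can be sketched as follows. This is a minimal plaintext illustration only, not the paper's protocol: it assumes each client update is a flat list of floats and that hypothetical per-client quality scores are already available, whereas the actual scheme operates on ciphertexts under the threshold Paillier cryptosystem.

```python
# Illustrative sketch of weighted aggregation of client updates.
# Assumption: `quality_scores` are hypothetical per-client data-quality
# weights; in EPPFL the aggregation would happen under encryption.

def weighted_aggregate(updates, quality_scores):
    """Combine client updates using normalized quality weights,
    so that low-quality (unreliable) clients contribute less."""
    total = sum(quality_scores)
    if total <= 0:
        raise ValueError("at least one client needs a positive quality score")
    weights = [q / total for q in quality_scores]
    dim = len(updates[0])
    # Coordinate-wise weighted sum of all client updates.
    return [sum(w * u[i] for w, u in zip(weights, updates))
            for i in range(dim)]
```

For example, with two clients whose updates are `[1.0, 2.0]` and `[3.0, 4.0]` and quality scores `1` and `3`, the second client dominates the aggregate.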