Federated learning is a decentralized, privacy-preserving paradigm that allows multiple clients to collaboratively train a model without exchanging their datasets; instead, each client uploads only its local gradients. However, recent research has shown that attackers can use clients' gradients to reconstruct the original training data, undermining the privacy guarantees of federated learning. Consequently, a growing number of studies aim to protect gradients with various techniques, one common choice being secret sharing. Previous work has shown, however, that when secret sharing is used to protect gradient privacy, the original gradient cannot be reconstructed if a share is lost or a server fails, interrupting the federated learning process. In this paper, we propose an approach that applies additive secret sharing to federated learning gradient aggregation, making it difficult for attackers to recover clients' original gradients. Moreover, our method ensures that, within a bounded failure probability, server damage or the loss of gradient shares does not disrupt the federated learning process. We also introduce a membership-level system, allowing members at different levels to ultimately obtain models with different accuracy levels.
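To make the core idea concrete, the following is a minimal sketch of additive secret sharing applied to gradient aggregation, under simplifying assumptions: the function names (make_shares, aggregate) are hypothetical, real-valued random masks are used where a production scheme would typically operate over a finite ring for exact secrecy, and the paper's redundancy against share loss and the membership-level system are not modeled here.

import numpy as np

def make_shares(gradient, n_servers, rng):
    # Split a gradient vector into n additive shares that sum to the original;
    # any subset of fewer than n shares reveals nothing but random masks.
    shares = [rng.standard_normal(gradient.shape) for _ in range(n_servers - 1)]
    shares.append(gradient - sum(shares))  # last share makes the sum exact
    return shares

def aggregate(all_client_shares):
    # Each server sums the shares it received from every client; adding the
    # per-server sums recovers the sum of all client gradients, while no
    # single server ever sees an individual client's gradient.
    per_server_sums = [sum(server_shares) for server_shares in zip(*all_client_shares)]
    return sum(per_server_sums)

rng = np.random.default_rng(0)
gradients = [rng.standard_normal(4) for _ in range(3)]  # three clients' local gradients
all_shares = [make_shares(g, n_servers=2, rng=rng) for g in gradients]
assert np.allclose(aggregate(all_shares), sum(gradients))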