In federated learning, a server trains a global model from gradients shared by multiple clients, so that clients never expose their raw training data. However, it has been shown that training data can be reconstructed from the shared gradients (so-called gradient inversion attacks), which can lead to serious privacy breaches. Popular defenses are perturbation-based, such as differential privacy, but these methods can incur high utility loss. In this paper, we reveal that large-magnitude gradients play a key role in the image reconstruction process, and accordingly propose two pruning-based defense mechanisms, SLGP and RLGP, for different model architectures. Because only very few gradients are affected, model utility is maintained. To demonstrate their effectiveness, we evaluate how well our mechanisms prevent the reconstruction of input images across various model architectures and datasets under state-of-the-art attacks. Images reconstructed from gradients processed by our mechanisms are unrecognizable, while the models retain their original performance.
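To make the core idea concrete, below is a minimal PyTorch sketch of magnitude-based gradient pruning: before sharing, each client zeroes out the few largest-magnitude entries in its gradients. The function name, the `prune_ratio` parameter, and the per-tensor top-k strategy are illustrative assumptions for exposition, not the exact SLGP or RLGP procedures defined in the paper.

```python
import torch

def prune_large_gradients(grads, prune_ratio=0.01):
    """Zero out the largest-magnitude entries in each gradient tensor.

    A generic sketch of magnitude-based pruning; `prune_ratio` is a
    hypothetical parameter controlling the fraction of entries pruned.
    """
    pruned = []
    for g in grads:
        flat = g.flatten().clone()
        k = max(1, int(prune_ratio * flat.numel()))
        # Indices of the k largest-magnitude gradient entries,
        # i.e., those assumed most useful to an inversion attack.
        _, idx = torch.topk(flat.abs(), k)
        flat[idx] = 0.0
        pruned.append(flat.view_as(g))
    return pruned
```

Since only a `prune_ratio` fraction of each tensor is zeroed, the shared update remains close to the original gradient, which is why utility loss stays small relative to perturbation-based defenses.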