Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a model without sharing their data, thereby preserving privacy. Rather than transmitting raw data between the clients and the base station, a model is sent to each device, trained locally, and then sent back to the base station. Each iteration of this process is known as a communication round.

However, FL faces challenges, especially when data is not independently and identically distributed (Non-IID). With Non-IID data, the data distribution can vary significantly across clients, so that certain classes or features are overrepresented on some clients and underrepresented on others. This lack of uniformity can lead to biased model updates and reduced performance. To address this, previous studies have introduced trustworthiness metrics that enable more reliable model aggregation, minimizing the accuracy losses associated with Non-IID data.

Another significant challenge is the high transmission load: exchanging model parameters between clients and the base station is resource-intensive, straining communication channels and slowing down the entire process, especially for larger models and datasets. This communication overhead is a bottleneck that limits the scalability of FL, particularly in resource-constrained environments.

Our research addresses these challenges by integrating quantization into the trustworthiness model, specifically in a Non-IID scenario. Quantization reduces the precision of model parameters, shrinking the amount of data transmitted and making communication more efficient. We applied different levels of quantization intensity to a model trained on the CIFAR-10 dataset and found that certain methods can substantially reduce transmission overhead without significant accuracy loss. Our findings suggest that combining quantization with trustworthiness metrics can markedly enhance communication efficiency and potentially improve the adoption of Federated Learning in resource-constrained environments.
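As a minimal sketch of the idea (not the implementation evaluated in this work), the following Python snippet uniformly quantizes a client's model update to 8-bit integers before transmission and averages the dequantized updates at the base station; all names (`quantize`, `dequantize`, `num_bits`) are illustrative assumptions.

```python
import numpy as np

def quantize(weights, num_bits=8):
    """Uniformly quantize float32 weights to signed integers (num_bits <= 8)."""
    qmax = 2 ** (num_bits - 1) - 1                  # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax          # one scale per tensor
    scale = float(scale) if scale > 0 else 1.0      # guard all-zero tensors
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale                                 # transmit ints plus one float

def dequantize(q, scale):
    """Recover approximate float weights on the receiving side."""
    return q.astype(np.float32) * scale

# One communication round: each client quantizes its locally trained update
# before transmission; the base station dequantizes and averages (FedAvg-style).
client_updates = [np.random.randn(10_000).astype(np.float32) for _ in range(5)]
received = [dequantize(*quantize(u, num_bits=8)) for u in client_updates]
global_update = np.mean(received, axis=0)

full = client_updates[0].nbytes                     # 40,000 bytes at float32
small = quantize(client_updates[0])[0].nbytes       # 10,000 bytes at int8
print(f"per-client payload: {full} -> {small} bytes")
```

In this sketch a single per-tensor scale keeps the side information to one float per tensor; at 8 bits the per-client payload shrinks to a quarter of its float32 size, at the cost of a bounded rounding error in each parameter.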