Federated Learning (FL) is a distributed Deep Learning (DL) technique that builds a global model through the local training of multiple edge devices. It relies on a central server for model communication and for the aggregation of post-trained models. The central server orchestrates the training process by sending each participating device an initial or pre-trained model to train; to achieve the learning objective, focused updates from the edge devices are then sent back to the central server for aggregation. While this architecture and its information flows can help preserve the privacy of participating devices' data, the strong dependence on the central server is a significant drawback: the server is a potential single point of failure, and a malicious server may be able to reconstruct the original training data, which could undermine trust, transparency, fairness, privacy, and security. Decentralizing the FL process addresses these issues; in particular, integrating a decentralized protocol such as Blockchain technology into FL ensures secure aggregation. This paper proposes a Blockchain-based secure aggregation strategy for FL. The Blockchain is implemented as the communication channel between the central server and the edge devices, and it provides a mechanism for masking devices' local data during secure aggregation, preventing a malicious server from compromising or reconstructing the training data. This design enhances the scalability of the system, eliminates the threat of a single point of failure at the central server, reduces vulnerabilities in the system, and ensures secure, transparent communication. Furthermore, our framework utilizes a fault-tolerant server to help handle the dropouts and stragglers that can occur in federated environments.
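To illustrate how masking can hide individual updates from the aggregating server, the sketch below uses pairwise additive masking, a standard secure-aggregation construction: each pair of clients derives a shared mask that one adds and the other subtracts, so the masks cancel in the sum while each individual masked update reveals nothing on its own. The function names, the toy list-of-floats models, and the pre-shared seed dictionary are illustrative assumptions, not the paper's actual protocol.

```python
import random

def mask_update(client_id, update, client_ids, shared_seeds):
    """Additively mask one client's update with pairwise masks.

    For each pair (i, j), both clients derive the same pseudorandom
    vector from a pre-shared seed; the lower-id client adds it and the
    higher-id client subtracts it, so the masks cancel on aggregation.
    (Seed agreement, e.g. via key exchange, is assumed out of scope.)
    """
    masked = list(update)
    for peer in client_ids:
        if peer == client_id:
            continue
        rng = random.Random(shared_seeds[frozenset((client_id, peer))])
        sign = 1 if client_id < peer else -1
        for k in range(len(masked)):
            masked[k] += sign * rng.uniform(-1.0, 1.0)
    return masked

def aggregate(masked_updates):
    """Server-side averaging: the server only ever sees masked vectors,
    yet their mean equals the mean of the true updates."""
    n, dim = len(masked_updates), len(masked_updates[0])
    return [sum(u[k] for u in masked_updates) / n for k in range(dim)]
```

For example, with three clients holding updates `[1.0, 2.0]`, `[3.0, 4.0]`, and `[5.0, 6.0]`, aggregating the masked vectors recovers the plain average `[3.0, 4.0]` (up to floating-point rounding), even though no single masked update resembles its original.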
To reduce the training time, we implement a synchronous callback (end-process) mechanism that terminates a training round once sufficient post-trained models have been returned for aggregation, i.e., once the threshold accuracy has been achieved. This mechanism resynchronizes clients holding stale or outdated models, minimizes the wastage of resources, and increases the rate of convergence of the global model.
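The round-termination logic above can be sketched as follows, assuming updates arrive one at a time and the threshold check runs after each arrival. The names `run_round` and `fedavg`, the accuracy-evaluation callback, and the list-based simulation of arrival order are hypothetical illustrations, not the paper's implementation.

```python
def fedavg(models):
    """Element-wise average of a list of model parameter vectors."""
    n = len(models)
    return [sum(m[k] for m in models) / n for k in range(len(models[0]))]

def run_round(client_updates, evaluate, target_acc, min_k):
    """Aggregate post-trained models as they arrive; end the round as
    soon as at least min_k updates yield an aggregate whose accuracy
    meets target_acc. Clients whose updates miss the cutoff are
    returned as stragglers to be resynchronized with the new global
    model. client_updates is a list of (client_id, update) pairs in
    (simulated) arrival order."""
    received = []
    for i, (_, update) in enumerate(client_updates):
        received.append(update)
        if len(received) >= min_k:
            candidate = fedavg(received)
            if evaluate(candidate) >= target_acc:
                stragglers = [cid for cid, _ in client_updates[i + 1:]]
                return candidate, stragglers
    # All clients reported before the threshold was reached.
    return fedavg(received), []
```

Ending the round at the accuracy threshold, rather than waiting for every client, is what saves the wall-clock time otherwise spent on stragglers; the returned straggler list is then used to push the fresh global model back to those clients so they do not keep training a stale one.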