Federated learning (FL) on edge devices has emerged as a promising approach to decentralized model training, preserving data privacy and improving efficiency in distributed networks. However, the complexity of these models poses significant challenges to transparency and interpretability, both of which are critical for trust and accountability in real-world applications. This review explores the integration of explainable AI techniques to enhance model interpretability within FL systems. By incorporating computational geometry, we aim to optimize model structure and decision-making processes, providing clearer insight into how models generate predictions. We also examine the role of advanced database architectures in managing the complexity of FL models on edge devices, ensuring efficient data handling and storage. Together, these approaches contribute to a more transparent, efficient, and scalable framework for FL on edge networks, addressing key challenges in both model explainability and performance optimization. The review highlights recent advances and suggests future directions for research at the intersection of FL, edge computing, explainability, and computational techniques.
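To ground the decentralized-training setting the abstract describes, the following is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation scheme in which clients train locally and only model parameters, never raw data, reach the server. All names, the toy linear model, and the simulated clients are illustrative assumptions, not details taken from this review.

```python
# Minimal FedAvg sketch: each client takes a local gradient step on its
# private data; the server averages the resulting models, weighted by
# local dataset size. Illustrative only; not the paper's method.
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step of least-squares linear regression on a client's data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(client_models, client_sizes):
    """Server-side aggregation: size-weighted average of client models."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_models, client_sizes))

# Two simulated edge clients; raw data never leaves the client.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]

global_w = np.zeros(3)
for _ in range(5):  # communication rounds
    local_models = [local_update(global_w, d) for d in clients]
    global_w = fed_avg(local_models, [len(d[1]) for d in clients])
```

In this sketch, only the 3-dimensional weight vector is communicated each round, which is the property that makes FL attractive for privacy and bandwidth on edge networks.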