Federated Learning (FL) enables the deployment of distributed machine learning models over the cloud and Edge Devices (EDs) while preserving the privacy of sensitive local data, such as electronic health records. However, despite FL's advantages in security and flexibility, current constructions still suffer from several limitations: heavy computation overhead on resource-constrained EDs, communication overhead from uploading converged local model parameters to a centralized server for aggregation, and no guarantee that previously acquired knowledge is preserved under incremental learning over new local data sets. This paper introduces a secure and resource-friendly protocol for parameter aggregation in federated incremental learning and its applications. In this protocol, the central server relies on a new aggregation method called orthogonal gradient aggregation. The method assumes that each local data set changes continually and updates parameters in the direction orthogonal to the subspaces spanned by previous parameters. As a result, our new construction is robust against catastrophic forgetting, maintains the accuracy of the federated neural network, and is efficient in both computation and communication overhead. Moreover, extensive experimental analysis on several representative incremental-learning data sets demonstrates the efficiency, efficacy, and flexibility of the new protocol.
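To make the core idea concrete, the following is a minimal sketch of orthogonal gradient projection in the spirit described above: a new update is projected onto the orthogonal complement of a stored subspace of earlier directions so that it does not interfere with previously learned knowledge. This is an illustrative assumption-based sketch, not the authors' exact aggregation protocol; the function names, shapes, and the Gram-Schmidt bookkeeping are hypothetical.

```python
import numpy as np

def project_orthogonal(grad, basis):
    """Project a flattened gradient onto the orthogonal complement of the
    subspace spanned by the orthonormal columns of `basis`.

    grad  : 1-D array, the new (aggregated) gradient.
    basis : 2-D array whose columns are orthonormal directions retained
            from earlier rounds/tasks, or None if nothing is stored yet.
    """
    if basis is None or basis.size == 0:
        return grad
    # Remove the component of `grad` lying inside the stored subspace,
    # so the update is orthogonal to previously learned directions.
    return grad - basis @ (basis.T @ grad)

def extend_basis(basis, grad, tol=1e-8):
    """Append the normalized orthogonal residual of `grad` to the basis
    (a single Gram-Schmidt step); skip if the residual is negligible."""
    residual = project_orthogonal(grad, basis)
    norm = np.linalg.norm(residual)
    if norm < tol:
        return basis
    new_dir = (residual / norm).reshape(-1, 1)
    return new_dir if basis is None else np.hstack([basis, new_dir])

# Toy usage: two sequential "tasks" whose updates should not interfere.
basis = None
g_task1 = np.array([1.0, 0.0, 0.0])
basis = extend_basis(basis, g_task1)

g_task2 = np.array([0.5, 1.0, 0.0])
update = project_orthogonal(g_task2, basis)   # -> [0., 1., 0.]
print(update)
```

In this sketch the second update loses its component along the first task's direction, which is the mechanism by which orthogonal projection mitigates catastrophic forgetting.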