Abstract

Federated Learning (FL) is a machine learning technique that enables collaborative learning of valuable information across devices or sites without moving the data. In FL, the model is trained and shared across decentralized locations where data are privately owned. After local training, model updates are sent back to a central server, thus enabling access to distributed data at large scale while maintaining privacy, security, and data access rights. Although FL is a well-studied topic, existing frameworks are still at an early stage of development. They encounter challenges with respect to scalability, data security, aggregation methodologies, data provenance, and production readiness. In this paper, we propose a novel FL framework that supports scalable processing with respect to data, devices, sites, and collaborators, as well as monitoring services, privacy, and support for diverse use cases. Furthermore, we integrate multi-party computation (MPC) within the FL setup, preventing reverse-engineering attacks. The proposed framework has been evaluated in diverse use cases in both cross-device and cross-silo settings. In the former case, in-device FL is leveraged in the context of an AI-driven Internet of Medical Things (IoMT) environment. We demonstrate the framework's suitability for a range of AI techniques while benchmarking against conventional centralized training. Furthermore, we prove the feasibility of developing a user-friendly pipeline that enables an efficient implementation of FL in diverse clinical use cases.
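
To make the train-locally-then-aggregate workflow described above concrete, the sketch below illustrates a generic federated averaging (FedAvg-style) round in Python. It is not the paper's implementation: the model (least-squares regression), the client data, and the function names `local_train` and `fedavg` are all illustrative assumptions; only the structure (clients compute local updates on private data, the server averages them weighted by data size) reflects the FL setup the abstract describes.

```python
# Minimal federated averaging sketch (illustrative only; not the proposed framework).
# Each client trains on private data; only model weights leave the device, and the
# server combines them with a data-size-weighted average.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: a few epochs of gradient descent on squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean((Xw - y)^2)
        w -= lr * grad
    return w

def fedavg(clients, weights, rounds=10):
    """Server loop: broadcast weights, collect local models, average by data size."""
    for _ in range(rounds):
        updates = [local_train(weights, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        weights = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return weights

# Three clients with private datasets of different sizes drawn from one linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

# The aggregated model approaches [2, -1] although the raw data were never pooled.
print(fedavg(clients, weights=np.zeros(2)))
```

In a secure deployment of the kind the paper targets, the plain `np.average` aggregation step would be replaced by an MPC protocol so that the server never observes any individual client's update in the clear.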
