Abstract

Modern industrial systems increasingly apply artificial intelligence (AI) techniques to mine and process big data generated by the Internet of Medical Things (IoMT). Federated Learning (FL), an emerging distributed machine learning paradigm, has been widely adopted in IoMT-based systems because it allows AI to be incorporated into such lightweight distributed computing systems while also addressing privacy concerns. However, extensive research has shown that classical FL remains vulnerable to privacy threats, including data leakage and adversarial attacks during gradient transfer. Motivated by these issues, we propose a privacy-preserving framework, Fed_Select, that ensures user anonymity in IoMT-based environments for big data analysis under the FL scheme. Fed_Select uses alternating minimization to limit the gradients shared and the participants involved in training, thereby reducing the system's points of vulnerability. The framework operates on an edge-computing architecture that ensures user anonymity through hybrid encryption while also reducing the load on the central server. In addition, Laplacian-noise-based differential privacy is applied to the shared attributes, preserving the confidentiality of the transferred data even under adversarial conditions. Experimental results on standard datasets show that varying the volume of shared gradients and the number of participants does not proportionally affect the various system performance parameters. Specifically, we determine an ideal range of client and gradient-sharing fractions, together with an appropriate noise level for the differential privacy mechanism. Finally, we analyze the system from a security perspective and compare it with existing schemes.
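To make the core mechanism concrete, the sketch below illustrates one way a Fed_Select-style round could work: a fraction of clients is sampled, each client shares only a fraction of its gradient coordinates (here, the largest-magnitude ones), and every shared value is clipped and perturbed with Laplace noise scaled as sensitivity/epsilon, the standard Laplace mechanism for differential privacy. This is a minimal sketch assuming NumPy gradients; the function names, the top-k selection rule, and the parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def laplace_noise(size, sensitivity, epsilon, rng):
    """Laplace noise with scale = sensitivity / epsilon (standard DP mechanism)."""
    return rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=size)

def select_and_perturb(gradient, grad_fraction, sensitivity, epsilon, rng):
    """Share only the largest-magnitude fraction of gradient entries,
    each clipped and perturbed with Laplace noise before upload."""
    flat = gradient.ravel()
    k = max(1, int(grad_fraction * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]                      # top-k coordinates
    clipped = np.clip(flat[idx], -sensitivity, sensitivity)  # bound sensitivity
    noisy = clipped + laplace_noise(k, sensitivity, epsilon, rng)
    return idx, noisy

def federated_round(client_grads, client_fraction, grad_fraction,
                    sensitivity=1.0, epsilon=1.0, seed=0):
    """One aggregation round: sample a fraction of clients, collect their
    noisy partial gradients, and average the shared coordinates."""
    rng = np.random.default_rng(seed)
    n_clients = len(client_grads)
    m = max(1, int(client_fraction * n_clients))
    chosen = rng.choice(n_clients, size=m, replace=False)

    dim = client_grads[0].size
    agg, counts = np.zeros(dim), np.zeros(dim)
    for c in chosen:
        idx, noisy = select_and_perturb(client_grads[c], grad_fraction,
                                        sensitivity, epsilon, rng)
        agg[idx] += noisy
        counts[idx] += 1
    counts[counts == 0] = 1          # leave unshared coordinates at zero
    return agg / counts              # coordinate-wise average of shared values

# Example: 10 clients, sharing 40% of clients and 10% of gradient entries.
grads = [np.random.default_rng(i).normal(size=1000) for i in range(10)]
update = federated_round(grads, client_fraction=0.4, grad_fraction=0.1)
```

Limiting both fractions shrinks the attack surface of each round, since fewer parties and fewer coordinates are exposed in transit, which is the intuition behind tuning these fractions experimentally.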
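The hybrid encryption step can likewise be sketched. The abstract does not specify the ciphers used, so the following assumes a common RSA-plus-AES construction (using the `cryptography` package): each upload is encrypted with a fresh symmetric AES-GCM key, and that key is wrapped with the server's RSA public key. The key sizes and the choice of RSA-OAEP/AES-GCM are illustrative assumptions, not the paper's scheme.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Server key pair; in practice the edge node would hold only the public key.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = server_key.public_key()

payload = b"serialized noisy gradient update"  # hypothetical upload

# 1. Symmetric layer: fresh AES-256-GCM key and nonce per message.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, payload, None)

# 2. Asymmetric layer: wrap the AES key with RSA-OAEP.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# Server side: unwrap the AES key, then decrypt the payload.
recovered_key = server_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == payload
```

Because only the central server can unwrap the symmetric key, intermediate edge nodes can relay uploads without learning their contents, which supports both the anonymity and the server load reduction claims.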
