Abstract

Human activity recognition plays a vital role in various applications like healthcare, sports, and smart environments. … With the growing use of smart wearables and sensors, privacy concerns around recognizing individuals' activities have increased drastically. Legacy techniques send data from millions of users to a central server, where processing and model generation take place, which increases the load on the server. Federated learning is a promising method for handling sensitive and personal data efficiently, and for using distributed techniques to mitigate the challenges faced by legacy approaches. Edge devices such as smartphones and embedded systems collect the raw data, preprocess it, and train a personalized model. These models are then aggregated at a central server using federated learning methods, and the aggregated model is sent back to the clients. A Long Short-Term Memory (LSTM) network is used for model generation, as it effectively captures temporal dependencies in sequential data. To reflect a real-world scenario, an AWS EC2 instance is used as the server, and laptops and smartphones are used as clients. Our results demonstrate that such methods can be adopted and deployed at a much larger scale, and that the performance of the global and local models is comparable to that attained by conventional methods. The use of federated learning marks a significant step forward in the development of human activity recognition, allowing collaborative learning across multiple devices while ensuring that user privacy is maintained.
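The sketch below illustrates the workflow the abstract describes: each client trains an LSTM classifier on its own sensor data and only the model weights are sent to the server, which averages them and returns a global model. It is a minimal illustration in PyTorch; the layer sizes, class count, and the FedAvg-style weighted averaging are assumptions for demonstration, not the authors' exact configuration.

```python
import copy
import torch
import torch.nn as nn

class HARLSTM(nn.Module):
    """LSTM classifier over fixed-length windows of wearable-sensor readings."""
    def __init__(self, n_features=9, hidden_size=64, n_classes=6):  # illustrative sizes
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):               # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)      # final hidden state summarizes the sequence
        return self.fc(h_n[-1])         # class logits per activity

def local_update(global_model, loader, epochs=1, lr=1e-3):
    """Client side: train on private data; only weights leave the device."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fed_avg(client_states, client_sizes):
    """Server side: average client weights, weighted by each client's sample count."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(state[key] * (n / total)
                       for state, n in zip(client_states, client_sizes))
    return avg
```

In each round, the server would broadcast the aggregated weights back to the clients with `model.load_state_dict(fed_avg(...))`, so raw sensor data never leaves the edge devices.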
