Abstract

The proliferation of sensor devices in recent years has made Human Activity Recognition (HAR) a focus of academic interest. Smartphones with built-in sensors make it possible to employ those sensors for activity-detection tasks, such as assisting the elderly with their everyday activities. A range of sensors, such as global positioning systems (GPS), cameras, accelerometers, microphones, light sensors, and compasses, are standard on these smartphones. Activity recognition is a promising field of study that could be used to provide consumers with flexible and efficient services. The purpose of our research is to evaluate a system that makes use of accelerometers, the acceleration sensors built into smartphones. Accelerometer data from twenty-six users was collected as they went about their daily lives, and supervised machine learning classification was used to recognize six distinct human activities: sitting, standing, lying down, walking, and climbing and descending stairs. After the sample data were merged and aggregated, supervised machine learning techniques were applied to the resulting instances to generate prediction models. To move beyond the limitations of the laboratory, we used the Google Android platform and the Physics Toolbox Sensor Suite to gather this time-series data. In this work, we examine how machine learning and deep neural networks have been used to address human activity recognition with smartphone sensors. Furthermore, we demonstrated that data collected at lower sampling frequencies can still serve its intended purpose. Our most basic deep network model yielded a maximum accuracy of 95.71% on the male dataset and 94.62% on the female dataset.
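The pipeline summarized above (windowing raw accelerometer samples, extracting features, and fitting a supervised classifier) can be sketched in miniature. This is an illustrative toy, not the paper's actual method: the window length, the mean/standard-deviation feature set, the nearest-centroid classifier, and the synthetic data are all assumptions made for the example.

```python
import math
import random

WINDOW = 50  # samples per window (assumed, e.g. ~2.5 s at 20 Hz)

def features(window):
    """Mean and standard deviation per accelerometer axis (x, y, z)."""
    feats = []
    for axis in range(3):
        vals = [sample[axis] for sample in window]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats.extend([mean, math.sqrt(var)])
    return feats

def train(samples):
    """samples: {activity_label: [feature_vector, ...]} -> class centroids."""
    return {
        label: [sum(col) / len(col) for col in zip(*vecs)]
        for label, vecs in samples.items()
    }

def predict(centroids, vec):
    """Assign the label whose centroid is nearest in feature space."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(vec, centroids[lbl])),
    )

# Synthetic windows: a still activity is low-variance around gravity (~9.8 on
# z), a moving activity is noisy. Real data would come from the phone sensor.
random.seed(0)
def synth(noise):
    return [(random.gauss(0, noise), random.gauss(0, noise),
             random.gauss(9.8, noise)) for _ in range(WINDOW)]

train_set = {
    "standing": [features(synth(0.05)) for _ in range(10)],
    "walking": [features(synth(2.0)) for _ in range(10)],
}
centroids = train(train_set)
print(predict(centroids, features(synth(0.05))))  # a still, low-variance window
print(predict(centroids, features(synth(2.0))))   # a noisy, high-variance window
```

In practice the study compares stronger supervised learners and deep networks on all six activities; the nearest-centroid rule here merely stands in for that classification step.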
