Abstract

In this work, we present a technique, together with a dataset, for improving the recognition of daily-life assistive activities in a smart Internet of Things (IoT) environment. We propose that fusing data from multiple sensing devices, such as the Microsoft Kinect and smartwatches, can significantly improve detection performance when incorporated into an IoT framework. The Kinect, one of the most feature-rich input devices in the IoT world, is a popular choice among researchers for detecting postural activities. However, certain activity classes in IoT-based smart environments are frequently misclassified by Kinect-based solutions because of their similarity in 3D joint-position space. In such scenarios, the Kinect must be augmented with additional sensors to achieve the desired accuracy. In this work, we improve the detection of assistive activities related to sitting posture in general and dining in particular. Our goal is to enable a robot to understand the activities of a human at the dining table and plan its assistive tasks accordingly. This is a two-step process: first, Kinect sensor data is augmented with data from a collection of motion sensors; then, the fused data is analyzed for discriminative power through cross-validation of a Hidden Markov Model (HMM). In addition, we propose a two-level security scheme, consisting of key establishment and two-factor authentication, for the IoT-based activity recognition environment. Our experiments show that the Kinect, when complemented by motion-sensor data, reduces confusion instances by up to 12% on average. Moreover, we demonstrate the quality of the dataset through its clustering properties, using an unsupervised neural network.
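The HMM-based discrimination step described above amounts to scoring a fused sensor-feature sequence against one HMM per activity class and choosing the class with the highest likelihood. The following is a minimal sketch of that idea, not the paper's implementation: the activity labels, two-state models, and discretized symbol alphabet are illustrative assumptions, whereas a real system would learn the model parameters from Kinect and motion-sensor features during training.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-emission HMM.
    obs: sequence of symbol indices (discretized fused sensor features);
    pi: initial state probabilities; A: state transition matrix (rows = from-state);
    B: per-state emission probabilities."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    log_lik = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        # Propagate forward probabilities one step, then rescale to avoid underflow.
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        c = sum(alpha)
        log_lik += math.log(c)
        alpha = [a / c for a in alpha]
    return log_lik

def classify(obs, models):
    """Return the activity label whose HMM assigns obs the highest likelihood."""
    return max(models, key=lambda label: forward_log_likelihood(obs, *models[label]))

# Illustrative two-state models over a binary symbol alphabet (hypothetical values).
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
models = {
    "eating":   (pi, A, [[0.8, 0.2], [0.8, 0.2]]),  # states favor symbol 0
    "drinking": (pi, A, [[0.2, 0.8], [0.2, 0.8]]),  # states favor symbol 1
}
print(classify([0, 0, 0, 0], models))  # prints "eating"
```

In a cross-validation setup of the kind the abstract describes, each class HMM would be trained on the held-in folds of the fused Kinect + motion-sensor sequences, and the held-out sequences classified as above to measure the reduction in confusion instances.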
