Abstract

Smartphones, smartwatches, and wearable sensors have proliferated and become ubiquitous in our daily lives. Mobile sensors have found widespread use thanks to their ever-decreasing cost, ease of deployment, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Various techniques have been proposed for fall detection, gait analysis, activity monitoring, and heart rate and sleep sensing with wearable sensors and mobile phones. Compared to work that uses inertial measurement unit (IMU) data or static cameras installed in the environment, there has been relatively little work using egocentric video, i.e., the first-person view provided by wearable cameras. Moreover, most existing studies on egocentric video rely on only one sensor modality, namely the camera, and even fewer approaches combine egocentric video with IMU data. In this chapter, we will describe three different applications that use wearable cameras together with IMU data. First, we will present an overview of a fall detection system using wearable devices, e.g., smartphones and tablets, equipped with cameras and accelerometers. Since the portable device is worn by the subject, monitoring is not limited to confined areas and extends to wherever the subject may travel, as opposed to static sensors installed in certain rooms. Second, we will present an autonomous and robust method for counting footsteps and for tracking and calculating stride length by using both accelerometer and camera data from smartphones or Google Glass™. To provide higher precision, instead of using a preset stride length, the proposed method calculates the distance traveled with each step from the camera data. This method is compared with commercially available accelerometer-based step-counter apps; the results show that the proposed method provides a significant increase in accuracy and has the lowest average error rate in both the number of steps taken and the distance traveled. Finally, we will provide an overview of a robust and autonomous method to detect activities with more detail and context by using accelerometer and egocentric video data obtained from a smartphone.
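The chapter itself details the step-counting method; as a rough illustration of the idea summarized above, the sketch below combines accelerometer-based step detection with a per-step stride length derived from the camera. This is a minimal sketch under stated assumptions, not the chapter's algorithm: the peak-detection thresholds and the `stride_from_camera` callback (standing in for a visual-odometry-style estimate from the egocentric video) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks


def detect_steps(accel, fs=50.0):
    """Return sample indices of detected steps.

    accel: (N, 3) array of accelerometer samples; fs: sampling rate in Hz.
    Steps show up as peaks in the gravity-removed acceleration magnitude;
    the height and spacing thresholds here are illustrative, not the
    chapter's tuned values.
    """
    mag = np.linalg.norm(accel, axis=1)
    mag -= mag.mean()                              # crude gravity removal
    peaks, _ = find_peaks(mag,
                          height=1.0,              # m/s^2, illustrative
                          distance=int(0.3 * fs))  # steps >= 0.3 s apart
    return peaks


def distance_traveled(accel, stride_from_camera, fs=50.0):
    """Sum per-step stride lengths rather than assuming a preset stride.

    stride_from_camera(step_idx) is a hypothetical callback returning the
    stride length (meters) estimated from the egocentric video around that
    step, e.g., from camera motion between frames.
    """
    steps = detect_steps(accel, fs)
    return sum(stride_from_camera(i) for i in range(len(steps)))


# Example: a constant-stride stand-in for the camera-based estimate.
# total_m = distance_traveled(accel_samples, lambda i: 0.7)
```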
