Abstract

Human action recognition supported by highly accurate specialized systems, ambulatory systems, or wireless sensor networks has tremendous potential in the areas of healthcare and wellbeing monitoring. Recently, several studies have focused on recognizing actions from wearable inertial sensors; most use raw sensor data to build classification models, and only a few derive high-level representations directly related to anatomical characteristics of the human body. This research focuses on classifying a set of activities of daily living, such as functional mobility, and instrumental activities of daily living, such as preparing meals, performed by test subjects in their homes under naturalistic conditions. The joint angles of the upper and lower limbs are estimated from five wearable inertial sensors placed on the bodies of five test subjects. One set of features related to human limb motion is extracted from the orientation signals (high-level data) and another from the raw acceleration signals (low-level data), and both are used to build classifiers with four inference algorithms. The features proposed in this work are the number of movements and the average duration of consecutive movements. The classifiers successfully classify the set of actions with accuracies of up to 77.8% using raw data and up to 93.3% using high-level data. This study enables a comparison of the two data levels for classifying actions performed in daily environments using an inertial sensor network.
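As a rough sketch of the two proposed features, the snippet below counts the number of movements and computes the average duration of consecutive movements from a one-dimensional joint-angle signal. The velocity-threshold segmentation, the sampling rate, the threshold value, and the function name movement_features are illustrative assumptions; the paper's exact detection criterion is not reproduced here.

```python
import numpy as np

def movement_features(joint_angle, fs=50.0, vel_threshold=10.0):
    """Sketch (assumed details): extract the two proposed features
    from a 1-D joint-angle signal.

    joint_angle   -- joint-angle samples in degrees
    fs            -- sampling rate in Hz (assumed value)
    vel_threshold -- angular velocity (deg/s) above which a sample is
                     considered part of a movement (assumed criterion)
    """
    joint_angle = np.asarray(joint_angle, dtype=float)

    # Angular velocity via finite differences (deg/s).
    velocity = np.abs(np.diff(joint_angle)) * fs

    # Binary mask of "moving" samples.
    moving = velocity > vel_threshold

    # Segment consecutive runs of moving samples via edge detection.
    edges = np.diff(moving.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if moving.size and moving[0]:
        starts = np.r_[0, starts]
    if moving.size and moving[-1]:
        ends = np.r_[ends, moving.size]

    # Feature 1: number of movements.
    num_movements = int(starts.size)

    # Feature 2: average duration of consecutive movements (seconds).
    durations = (ends - starts) / fs
    avg_duration = float(durations.mean()) if num_movements else 0.0

    return num_movements, avg_duration
```

Applied, for instance, to a 50 Hz elbow-angle signal, the first return value counts the detected motion segments and the second gives their mean length in seconds; both would then serve as inputs to the classifiers.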

Highlights

  • Human action recognition supported by vision-based systems, ambulatory systems, or wireless sensor networks has tremendous potential in the areas of healthcare or wellbeing monitoring.[1,2]

  • The subjects were asked to perform their daily activities at will; that is, they were not instructed to perform any specific action in any prescribed order.

  • A set of features was extracted from the low-level (A and magXYZ) and high-level (L) data following the proposed action recognition method.


Introduction

Human action recognition supported by vision-based systems, ambulatory systems, or wireless sensor networks has tremendous potential in the areas of healthcare and wellbeing monitoring.[1,2] It is driven by growing real-world application needs in areas such as ambient-assisted living and security surveillance.[3] Human action recognition is a very challenging problem for a number of reasons, and automatic recognition of human actions under naturalistic conditions, principally using wearable sensors, is still an open research problem in the field of pervasive computing.[4] Currently, action recognition helps provide information about the behavior and habits of users, enabling computing systems to assist users with their daily tasks.[5]
