A module selection-based approach for efficient skeleton human action recognition
- Research Article
10
- 10.3390/s22134755
- Jun 23, 2022
- Sensors (Basel, Switzerland)
The training of Human Activity Recognition (HAR) models requires a substantial amount of labeled data. Unfortunately, despite being trained on enormous datasets, most current models perform poorly when evaluated on anonymous data from new users. Furthermore, owing to the limits and problems of working with human users, capturing adequate data for each new user is not feasible. This paper presents a semi-supervised adversarial learning approach using LSTM (Long Short-Term Memory) networks for human activity recognition. The proposed method trains on both annotated and unannotated (anonymous) data, adapting semi-supervised learning paradigms on which adversarial learning capitalizes to improve robustness to errors that appear in the process. Moreover, it adapts to changes in human activity routines and to new activities, i.e., it does not require prior understanding or historical information. At the same time, the method is designed as a temporal interactive model instantiation and can estimate heteroscedastic uncertainty owing to inherent data ambiguity. The methodology also benefits from multiple parallel sequential input streams predicting an output through synchronized LSTMs. In experiments on publicly available datasets collected from a smart home environment equipped with heterogeneous sensors, the proposed method achieved state-of-the-art results with more than 98% accuracy. This technique is a novel approach to high-level human activity recognition and is likely to have broad application prospects for HAR.
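The abstract's core building block is the LSTM cell. A minimal numpy sketch of a single LSTM time step is shown below, purely as an illustration of the gate mechanics; the parameter shapes, channel counts, and stacked-gate layout are assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (hypothetical minimal form).

    x: input vector (d,); h_prev/c_prev: previous hidden/cell state (n,);
    W: (4n, d), U: (4n, n), b: (4n,) hold the four gates' parameters stacked.
    """
    z = W @ x + U @ h_prev + b
    n = h_prev.shape[0]
    i = sigmoid(z[:n])          # input gate
    f = sigmoid(z[n:2*n])       # forget gate
    o = sigmoid(z[2*n:3*n])     # output gate
    g = np.tanh(z[3*n:])        # candidate cell state
    c = f * c_prev + i * g      # new cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c

rng = np.random.default_rng(0)
d, n = 6, 4                      # e.g. 6 sensor channels, 4 hidden units (illustrative)
W = rng.normal(size=(4*n, d)); U = rng.normal(size=(4*n, n)); b = np.zeros(4*n)
h = np.zeros(n); c = np.zeros(n)
for t in range(10):              # run over a short synthetic sensor sequence
    x_t = rng.normal(size=d)
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.shape)
```

The final hidden state `h` would feed a classifier head; the semi-supervised adversarial training loop described in the abstract sits on top of such a recurrent backbone and is not sketched here.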
- Research Article
3
- 10.4018/ijssci.311445
- Oct 14, 2022
- International Journal of Software Science and Computational Intelligence
Human activity recognition (HAR) is a crucial and challenging classification task for a range of applications from surveillance to assistance. Existing sensor-based HAR systems have limited training data availability and lack fast and accurate methods for robust and rapid activity recognition. In this paper, a novel hybrid HAR technique based on CNN, bi-directional long short-term memory, and gated recurrent units is proposed that can accurately and quickly recognize new human activities with a limited training set and high accuracy. The experiment was conducted on the UCI Machine Learning Repository's MHEALTH dataset to analyze the effectiveness of the proposed method. The confusion matrix and accuracy score are utilized to gauge the performance of the presented model. Experiments indicate that the proposed hybrid approach integrating CNN, bi-directional LSTM, and gated recurrent units outperforms comparable methods in computational complexity and efficiency. The overall findings demonstrate that the proposed hybrid model performs exceptionally well, with an accuracy of 94.68%.
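The front end of such a hybrid is a 1-D convolution over multichannel sensor windows. The numpy sketch below shows only that first stage; the window length, channel count, and kernel sizes are assumptions (23 channels loosely mirrors MHEALTH), and the recurrent layers that follow in the paper are not reproduced.

```python
import numpy as np

def conv1d_relu(x, kernels):
    """x: (T, C) multichannel sensor window; kernels: (K, k, C).
    Valid 1-D convolution plus ReLU, one output channel per kernel."""
    T, C = x.shape
    K, k, _ = kernels.shape
    out = np.empty((T - k + 1, K))
    for j in range(K):
        # np.convolve flips its kernel, so reverse it to get correlation
        out[:, j] = sum(np.convolve(x[:, c], kernels[j, ::-1, c], mode="valid")
                        for c in range(C))
    return np.maximum(out, 0.0)

rng = np.random.default_rng(7)
window = rng.normal(size=(128, 23))      # 128 samples x 23 sensor channels (illustrative)
feat = conv1d_relu(window, rng.normal(scale=0.1, size=(16, 5, 23)))
print(feat.shape)                        # the (steps, channels) map would feed BiLSTM/GRU layers
```

In the paper's architecture the resulting feature sequence is consumed by the bi-directional LSTM and GRU layers before classification.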
- Research Article
6
- 10.3844/jcssp.2019.1040.1049
- Jul 1, 2019
- Journal of Computer Science
Human action recognition is a computer vision task. The evaluation of action recognition algorithms relies on proper extraction and learning of the data. The success of deep learning, and especially layer-by-layer learning, has led to many impressive results in several neural-network contexts. Recurrent Neural Networks (RNNs) with hidden units have demonstrated advanced performance on tasks as varied as image captioning and handwriting recognition. In particular, the Gated Recurrent Unit (GRU) is able to learn and exploit the sequential and temporal structure required for video recognition. Moreover, a video sequence can be better described by combining visual and motion features. In this paper, we present our approach for human action recognition based on the fusion of sequential visual features and motion paths. We evaluate our technique on the challenging UCF Sports Action, UCF101, and KTH datasets for human action recognition and obtain competitive results.
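Since this abstract centers on the GRU, a minimal numpy sketch of one GRU time step may clarify the mechanism; the feature dimension, hidden size, and parameter layout are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Wr, Wh, Uz, Ur, Uh):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate hidden state
    return (1 - z) * h_prev + z * h_tilde           # interpolated new state

rng = np.random.default_rng(1)
d, n = 8, 5   # e.g. an 8-dim per-frame feature, 5 hidden units (illustrative)
params = [rng.normal(scale=0.1, size=s) for s in [(n, d)]*3 + [(n, n)]*3]
h = np.zeros(n)
for t in range(16):                                 # 16 video frames
    h = gru_step(rng.normal(size=d), h, *params)
print(h.shape)
```

In a fusion setup like the one described, two such recurrent streams (visual features and motion paths) would each produce a final state, which are then combined for classification.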
- Research Article
1
- 10.22399/ijcesen.329
- Jun 27, 2024
- International Journal of Computational and Experimental Science and Engineering
Human activity recognition is the process of automatically identifying and classifying human activities based on data collected from different modalities such as wearable sensors, smartphones, or similar devices equipped with the necessary sensors or cameras capturing the behavior of individuals. In this study, XGBoost and LightGBM approaches for human activity recognition are proposed, and the performance and execution times of the proposed approaches are compared. The proposed methods are evaluated on a dataset of accelerometer and gyroscope data acquired with a smartphone for six activities: laying, sitting, standing, walking, walking downstairs, and walking upstairs. The dataset is divided into training and test sets; the proposed methods are trained on the training set and evaluated on the test set. The study achieves 97.23% accuracy with the LightGBM approach and 96.67% accuracy with XGBoost. It is also found that XGBoost is faster than LightGBM when execution times are compared.
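The evaluation protocol in this abstract (split, train, score accuracy) can be sketched with numpy alone. A nearest-centroid classifier stands in for the gradient-boosted models, since XGBoost and LightGBM require their own libraries; the synthetic six-class features and the 70/30 split ratio are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for six-activity accelerometer/gyroscope features.
n_per_class, n_feat, n_classes = 60, 9, 6
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_feat))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Shuffle, then hold out 30% as the test set (the paper's split ratio
# is not given; 70/30 is an assumption here).
idx = rng.permutation(len(y))
split = int(0.7 * len(y))
train, test = idx[:split], idx[split:]

# Nearest-centroid classifier as a lightweight stand-in for the
# boosted-tree models compared in the study.
centroids = np.array([X[train][y[train] == c].mean(axis=0)
                      for c in range(n_classes)])
pred = np.argmin(((X[test][:, None, :] - centroids[None])**2).sum(-1), axis=1)
accuracy = (pred == y[test]).mean()
print(f"accuracy = {accuracy:.4f}")
```

Swapping the centroid model for `xgboost.XGBClassifier` or `lightgbm.LGBMClassifier` (with their libraries installed) would reproduce the actual comparison, including timing each `fit` call.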
- Research Article
5
- 10.1007/s11042-014-2225-6
- Aug 15, 2014
- Multimedia Tools and Applications
In this paper, a stereo camera-based novel approach for Human Activity Recognition (HAR) is presented using robust 3-D human body joint features and joint-specific Hidden Markov Models (HMMs). At f...
- Conference Article
4
- 10.1109/robio.2016.7866317
- Dec 1, 2016
Human action recognition and generation for imitation learning are very important topics in robot-human interaction research. In this paper, we present a novel approach for human action recognition and robot action generation based on Kinect motion-capture data using Hidden Markov Models (HMMs). The robot recognizes captured human actions using HMMs and generates similar actions from the same learned HMMs. Unlike traditional robot action generation methods, our system generates the robot action and its parameters only from the HMM learned in the recognition phase. A key point of this paper is that the robot can recognize and generate an action using an identical HMM, and it does not need to record any trajectory data for action generation with traditional methods such as Dynamic Movement Primitives (DMPs). Since the robot action and its parameters are generated from HMMs, we can adjust the parameters to change the robot's action speed. To improve action accuracy, we employ an Augmented Lagrange Multiplier (ALM) method to fine-tune the trajectory of the generated action, so that the fine-tuned action accurately reaches the target point while roughly keeping the style of the original action.
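HMM-based recognition of this kind typically scores an observation sequence against one HMM per action and picks the highest likelihood. The sketch below implements the standard forward algorithm in log space; the action names, state/symbol counts, and randomly drawn parameters are hypothetical, not the paper's models.

```python
import numpy as np

def hmm_log_likelihood(obs, log_pi, log_A, log_B):
    """Forward algorithm in log space: log-likelihood of a discrete
    observation sequence under one action's HMM."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

# Two hypothetical action models ("wave", "push") over 3 hidden states
# and 4 quantized motion symbols; all numbers are illustrative only.
def make_model(seed):
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(3))                 # initial state distribution
    A = rng.dirichlet(np.ones(3), size=3)          # row-stochastic transitions
    B = rng.dirichlet(np.ones(4), size=3)          # per-state emission probs
    return np.log(pi), np.log(A), np.log(B)

models = {"wave": make_model(0), "push": make_model(1)}
obs = np.array([0, 1, 1, 2, 3, 2])                 # a captured symbol sequence
scores = {name: hmm_log_likelihood(obs, *m) for name, m in models.items()}
best = max(scores, key=scores.get)                 # recognized action
print(best, scores)
```

The generation side of the paper then samples state and emission sequences from the same learned HMM, which this recognition-only sketch does not cover.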
- Research Article
9
- 10.1088/1757-899x/1042/1/012031
- Jan 1, 2021
- IOP Conference Series: Materials Science and Engineering
Human behavior analysis is always a significant aspect of societal communication, and it builds on factors such as human activity and action recognition. Human action recognition is a significant feature in different safety fields. The assessment of an action recognition algorithm depends on appropriate feature extraction and learning from the data. In human action recognition, classification plays the major role, so to perform it effectively a Gated Recurrent Neural Network is used with an increased level of computation. Feature extraction is one of the essential factors in human action recognition, as it influences the performance and computation time of the algorithm. This paper presents an approach for human action recognition based on a new hybrid deep learning model. The proposed method is evaluated on different datasets, including UCF Sports, KTH, and UCF101. On the UCF Sports dataset, the proposed method achieves an average accuracy of 96.8%.
- Conference Article
3
- 10.1109/sas51076.2021.9530029
- Aug 23, 2021
Recognising human activity can be advantageous in a number of different scenarios, including elder care, healthcare, or training. It can directly support humans in doing different activities, but it is still a challenge for systems to classify the activity in a way that is valuable for the user, as they often lack the robustness or simplicity needed for day-to-day use. In this paper, an approach for human activity recognition based on object interactions is presented. The proposed system consists of a wireless sensor network, with each sensor node measuring the received signal strength indication (RSSI) to its neighbouring nodes. The accumulated RSSI data is then analysed by a machine learning algorithm that tries to infer which of several dishes is being cooked. Experimental studies demonstrate promising results, and therefore potential for this technology in recognising human activity in the form of cooking, but its generalised approach makes it suitable for other environments, too.
- Conference Article
2
- 10.1109/iccic.2013.6724198
- Dec 1, 2013
There has been increased interest in recognition applications that build skeleton models of human motion from recorded video images. Although various methods have been proposed for recognizing human activities from realistic videos, the dependencies and relations among human motions have not been much investigated. In this paper, we propose an approach for efficient human action recognition using relations between motion data, taking joint positions from skeleton sequences. First, we collect many action samples using a sensor camera, a practical and cheap capturing device, combined with a biomechanical model derived from experimental data. Then, determining key frames for different actions, we compare human motions using key joint features for action recognition accuracy. The main contribution of this paper is an efficient and suitable method for recognizing human motions with less data and a biomechanical model. Experiments on three different actions performed by five different actors, with tracked data on video sequences, validate that our recognition approach outperforms most existing methods and that the model is computationally efficient.
- Research Article
188
- 10.1109/tpami.2010.214
- Dec 10, 2010
- IEEE Transactions on Pattern Analysis and Machine Intelligence
We present a discriminative part-based approach for human action recognition from video sequences using motion features. Our model is based on the recently proposed hidden conditional random field (HCRF) for object recognition. Similarly to HCRF for object recognition, we model a human action by a flexible constellation of parts conditioned on image observations. Differently from object recognition, our model combines both large-scale global features and local patch features to distinguish various actions. Our experimental results show that our model is comparable to other state-of-the-art approaches in action recognition. In particular, our experimental results demonstrate that combining large-scale global features and local patch features performs significantly better than directly applying HCRF on local patches alone. We also propose an alternative for learning the parameters of an HCRF model in a max-margin framework. We call this method the max-margin hidden conditional random field (MMHCRF). We demonstrate that MMHCRF outperforms HCRF in human action recognition. In addition, MMHCRF can handle a much broader range of complex hidden structures arising in various problems in computer vision.
- Research Article
13
- 10.1142/s0218001417500082
- Feb 2, 2017
- International Journal of Pattern Recognition and Artificial Intelligence
In this paper, we present a new approach for human action recognition using [Formula: see text] skeleton joints recovered from RGB-D cameras. We propose a descriptor based on differences of skeleton joints. This descriptor combines two characteristics, static posture and overall dynamics, that encode spatial and temporal aspects. We then apply the mean function to these characteristics to form the feature vector, which is used as input to a Random Forest classifier for action classification. The experimental results on two datasets, the MSR Action 3D dataset and the MSR Daily Activity 3D dataset, demonstrate that our approach is efficient and gives promising results compared to state-of-the-art approaches.
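One plausible reading of this descriptor can be sketched in numpy: within-frame pairwise joint differences for static posture, displacement relative to the first frame for overall dynamics, and mean pooling over time. The exact pairing scheme, reference frame, and normalization in the paper are not specified, so every choice below is an assumption.

```python
import numpy as np

def skeleton_descriptor(seq):
    """seq: (T, J, 3) array of J 3-D joints over T frames.

    Static posture: pairwise joint differences within each frame.
    Overall dynamics: joint displacement of each frame w.r.t. frame 0.
    Both parts are mean-pooled over time, as the abstract describes.
    """
    T, J, _ = seq.shape
    iu, ju = np.triu_indices(J, k=1)                 # all unordered joint pairs
    posture = seq[:, iu, :] - seq[:, ju, :]          # (T, J*(J-1)/2, 3)
    dynamics = seq - seq[0:1]                        # (T, J, 3)
    return np.concatenate([posture.mean(axis=0).ravel(),
                           dynamics.mean(axis=0).ravel()])

rng = np.random.default_rng(3)
clip = rng.normal(size=(30, 20, 3))   # 30 frames, 20 joints (Kinect-style layout assumed)
feat = skeleton_descriptor(clip)
print(feat.shape)                     # 190 pairs * 3 + 20 joints * 3 = 630 features
```

The resulting fixed-length vector would then be fed to a Random Forest (e.g. scikit-learn's `RandomForestClassifier`) as the abstract describes.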
- Research Article
38
- 10.1016/j.patrec.2016.07.021
- Aug 3, 2016
- Pattern Recognition Letters
Graph-based approach for 3D human skeletal action recognition
- Research Article
127
- 10.1016/j.jksuci.2019.09.004
- Sep 9, 2019
- Journal of King Saud University - Computer and Information Sciences
A new hybrid deep learning model for human action recognition
- Research Article
7
- 10.17485/ijst/2016/v9i5/72065
- Feb 9, 2016
- Indian Journal of Science and Technology
The objective of this review article is to study spatio-temporal approaches for addressing key issues such as multi-view settings, clutter, jitter, and occlusion in human action recognition. Based on high-level action units, a new sparse model was developed for recognizing human actions against a static background. For multi-camera views, a negative-space approach was proposed for identifying actions captured from different viewing angles. An approach based on space-time quantities was proposed to capture the changes of the action rather than the camera motion; this space-time approach handles both clutter and camera jitter. For static backgrounds, the sparse model for human action recognition exploits the fact that actions from the same class share the same units. The presented method was assessed on numerous public datasets, achieving recognition rates of 95.49% on the KTH dataset and 89% on UCF datasets. Based on negative space, a region-based method was offered that addresses the issue of long shadows in human action recognition; it was assessed on the most common datasets and attained higher precision than contemporary techniques. The approach based on space-time quantities, proposed to manage clutter, achieves recognition rates of 93.18% on the KTH dataset and 81.5% on the UCF dataset. To handle occlusion, a model with spatial and temporal consistency was presented; the algorithm was evaluated on an outdoor dataset with background clutter and a standard indoor dataset (HumanEva-I), and results were compared with advanced pose-estimation algorithms.
- Research Article
104
- 10.1016/j.patrec.2020.01.010
- Jan 13, 2020
- Pattern Recognition Letters
A multimodal approach for human activity recognition based on skeleton and RGB data