Abstract

With the development of cameras and sensors and the spread of cloud computing, life logs can be acquired and stored in ordinary households and used for various services. However, it is difficult to analyze video captured by home sensors in real time with machine learning because the data are large and the computation is heavy. A new computing paradigm called edge computing or fog computing, which distributes computation between the edge and the cloud, may address this issue: feature vectors are extracted from the video by preprocessing on the sensor side, and only the small feature vectors are sent to the cloud for learning. However, it is not clear how accurately actions can be recognized when only these feature vectors are used for learning and inference. We investigate the accuracy of action recognition with various machine learning methods using feature-vector information extracted from video. We use the pose estimation library OpenPose to extract the feature vectors and recognize actions with logistic regression, random forest, support vector machine, and neural network (NN) models, both a general NN and an LSTM. The experimental results show that actions can be recognized with 80% accuracy or higher using the random forest and neural network models. We also discuss a method to further improve the accuracy based on the experimental results.
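The pipeline described above — pose keypoints extracted on the sensor side, then fed as compact feature vectors to standard classifiers — can be sketched as follows. This is an illustrative sketch only, not the authors' code: the data are synthetic stand-ins for OpenPose BODY_25 output (25 keypoints per frame, flattened to 50 (x, y) coordinates), the two action classes are artificial, and the choice of scikit-learn for the logistic regression and random forest classifiers is an assumption.

```python
# Sketch: action classification from per-frame pose feature vectors.
# The feature vectors stand in for OpenPose BODY_25 keypoints
# (25 keypoints x 2 coordinates = 50 features per frame).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_keypoints = 600, 25

# Synthetic stand-in data: two hypothetical action classes, made
# separable by shifting the keypoint coordinates of one class.
X = rng.normal(size=(n_frames, n_keypoints * 2))
y = rng.integers(0, 2, size=n_frames)
X[y == 1] += 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train two of the classifier families named in the abstract and
# report held-out accuracy for each.
scores = {}
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    scores[type(model).__name__] = model.score(X_te, y_te)
    print(type(model).__name__, round(scores[type(model).__name__], 2))
```

A per-frame classifier like this ignores temporal order; the LSTM mentioned in the abstract would instead consume a sequence of such feature vectors across consecutive frames.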
