Abstract

One of the aims of the fourth industrial revolution is to seamlessly connect equipment and personnel, enabling a greater level of collaboration and, in turn, higher operational efficiency and improved decision-making. In this context, this work presents a deep-learning-based approach for automated activity recognition of two human workers carrying out typical manual assembly and part-handling tasks in a production-floor environment. A typical smartphone, attached to the wrist of each worker, serves as a wearable sensor recording acceleration in three directions, total acceleration, brightness, geomagnetic field strength, orientation in space and sound-wave intensity; the workers then perform a large number of experiments covering five activities of interest. Two types of neural network are employed to classify these activities. First, the experimental signals are fed to a variational autoencoder (VAE), which extracts the appropriate features; these features are then supplied to the last layer of a long short-term memory (LSTM) time-series network, trained to classify the five activities of interest. An additional goal of the present paper is the study of long-term signals (LTS), i.e. time series of much longer duration than those of the five aforementioned activities, which contain considerable noise due to periods of unrelated activity. Finally, a new methodology is proposed that combines the two trained networks in order to identify the desired activities within the LTS in a real-time application.
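The pipeline described above, a VAE encoder producing latent features from windowed smartphone signals, followed by an LSTM that classifies those features into five activities, can be sketched as follows. This is a minimal illustration of the data flow only, using NumPy with random (untrained) weights; all dimensions (number of channels, window length, latent and hidden sizes) are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration:
N_CHANNELS, WINDOW = 8, 50      # 8 sensor channels, 50 samples per window
LATENT, HIDDEN, N_CLASSES = 16, 32, 5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(window, W_mu, b_mu):
    # Stand-in for the trained VAE encoder: map a raw sensor window
    # to its latent mean, used here as the extracted feature vector.
    return W_mu @ window.ravel() + b_mu

def lstm_step(x, h, c, Wi, Wf, Wo, Wg, b):
    # One LSTM cell update over a latent feature vector x.
    z = np.concatenate([x, h])
    i = sigmoid(Wi @ z + b[0])      # input gate
    f = sigmoid(Wf @ z + b[1])      # forget gate
    o = sigmoid(Wo @ z + b[2])      # output gate
    g = np.tanh(Wg @ z + b[3])      # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Random (untrained) weights, only to show shapes and data flow.
W_mu = rng.normal(size=(LATENT, N_CHANNELS * WINDOW)) * 0.01
b_mu = np.zeros(LATENT)
gates = [rng.normal(size=(HIDDEN, LATENT + HIDDEN)) * 0.1 for _ in range(4)]
b = np.zeros((4, HIDDEN))
W_out = rng.normal(size=(N_CLASSES, HIDDEN)) * 0.1

# Feed a sequence of sensor windows through encoder -> LSTM -> softmax.
h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
for _ in range(10):                 # 10 consecutive windows
    window = rng.normal(size=(N_CHANNELS, WINDOW))
    feats = encode(window, W_mu, b_mu)
    h, c = lstm_step(feats, h, c, *gates, b)

logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()               # class probabilities over the 5 activities
print(probs.shape)
```

In a trained system the encoder and LSTM weights would come from the two training stages the abstract describes; here they merely demonstrate how windowed multichannel signals are compressed into features and then classified sequentially, which is also the structure needed for scanning long-term signals in real time.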
