Abstract
Lean Management focuses on the elimination of wasteful activities in production. Whilst numerous methods such as value stream analysis or spaghetti diagrams exist to identify wastes such as transport, inventory, defects, overproduction or waiting, the waste of human motion is difficult to detect. Activity recognition attempts to categorize human activities using sensor data. Human activity recognition (HAR) is already used in the consumer domain to detect activities such as walking, climbing stairs or running. This paper presents an approach to transfer human activity recognition methods to production in order to detect wasteful motion in production processes and to evaluate workplaces. Long short-term memory (LSTM) networks are used to classify human activities from sensor data captured by ordinary smartphones. In addition to the LSTM network, the paper contributes a labeled data set for supervised learning. The paper demonstrates how activity recognition can be included in learning factory training, from the generation of training data to the analysis of the results.
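To make the approach more concrete, the following minimal sketch shows how windowed smartphone sensor data could be fed into an LSTM classifier. It is an illustrative example only: the window length, channel count, activity classes and network size are assumptions made for demonstration, not the configuration or data set used in the paper.

# Minimal sketch of an LSTM classifier for windowed smartphone sensor data.
# All shapes, class labels and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 128   # samples per window (assumed)
N_CHANNELS = 6     # e.g. 3-axis accelerometer + 3-axis gyroscope (assumed)
N_CLASSES = 5      # e.g. walking, reaching, assembling, carrying, waiting (assumed)

model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(32, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data standing in for a labeled training set.
X = np.random.randn(1000, WINDOW_LEN, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=1000)
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)

In practice, the raw accelerometer and gyroscope streams would first be segmented into fixed-length, labeled windows, which is the usual preprocessing step in HAR pipelines.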
Summary
Human activity recognition (HAR) refers to deducing human activities from sensor data [18]. The authors of [13] demonstrated that AdaBoost can raise the overall accuracy on the publicly available REALDISP (REAListic sensor DISPlacement) data set to 99.98%. The research team evaluates different sensor locations on the human body as well as a variety of sensors, including environment sensors and devices for capturing vital signs such as heart rate or skin resistance. Their approach is based on a hierarchical classification that determines the location of the sensor before addressing the actual activity recognition problem. Following this approach, a high accuracy is attainable whilst maintaining the flexibility of alternative sensor placements and the use of smartphones or wearables [14].
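The hierarchical idea can be illustrated with a small two-stage sketch: a first classifier estimates the sensor placement, and a placement-specific classifier then recognizes the activity. The feature extraction, the use of AdaBoost in both stages, and the placement and activity labels below are assumptions made for illustration, not the cited authors' exact pipeline.

# Hedged sketch of a two-stage (placement, then activity) classification.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def extract_features(window):
    """Simple per-channel statistics as features (assumed, not from the paper)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Synthetic stand-in data: 600 windows of 128 samples with 6 channels.
windows = rng.normal(size=(600, 128, 6))
placements = rng.integers(0, 3, size=600)   # e.g. wrist, hip, upper arm (assumed)
activities = rng.integers(0, 5, size=600)

X = np.array([extract_features(w) for w in windows])

# Stage 1: predict where the sensor is worn.
placement_clf = AdaBoostClassifier(n_estimators=100).fit(X, placements)

# Stage 2: one activity classifier per sensor placement.
activity_clfs = {
    p: AdaBoostClassifier(n_estimators=100).fit(X[placements == p],
                                                activities[placements == p])
    for p in np.unique(placements)
}

def predict_activity(window):
    feats = extract_features(window).reshape(1, -1)
    p = placement_clf.predict(feats)[0]
    return activity_clfs[p].predict(feats)[0]

print(predict_activity(windows[0]))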