Activity recognition refers to the process of automatically identifying or interpreting the activities of objects based on data captured from different sensing devices. While previous research on indoor activity recognition has predominantly relied on visual data such as images or video recordings, we present a novel approach based on spatiotemporal trajectory data recorded by IoT-based sensors. The proposed approach is tailored for indoor manufacturing applications, leveraging trajectory partitioning, hierarchical clustering, and convolutional neural networks. Moreover, the vast majority of activity recognition models used in industrial settings are supervised methods requiring large, manually labelled datasets. This manual annotation process is unwieldy and labour-intensive, and hence often infeasible for practical deployments. In contrast, our proposed activity recognition approach is semi-supervised, meaning it can be trained with far less labelled data, significantly reducing the effort and costs associated with manual annotation. The proposed approach is evaluated on two indoor trajectory datasets from different manufacturing assembly processes. Experimental results demonstrate its effectiveness for activity recognition: the classification accuracy (measured using F-score) ranges from 0.81 to 0.95 and from 0.88 to 0.92 across the two indoor trajectory datasets. A comparison with a baseline model shows that it achieves up to an 18% improvement in classification accuracy. Furthermore, the classification results enable insights into factory floor states, aiding decision-making for operational efficiency and resource allocation.