Abstract

Human activity recognition revolves around quantitatively classifying and analyzing workers’ actions, for example with convolutional neural networks applied to the time-series data provided by inertial measurement units and motion capture systems. However, this requires expensive training datasets, since each warehouse scenario has slightly different settings and activities of interest. Here, transfer learning promises to shift the knowledge a deep learning method has gained on existing reference data to new target data. We benchmark interpretable and non-interpretable transfer learning for human activity recognition on the LARa order-picking dataset, with AndyLab and RealDisp as domain-related and domain-foreign reference datasets, respectively. We find that interpretable transfer learning via the recently proposed probabilistic rule stacking learner, which requires no labeled data on the target dataset, is possible if the labels are sufficiently semantically related; its success depends on the proximity of the reference and target domains and labels. Non-interpretable transfer learning via fine-tuning can be applied even when there is a major domain shift between the datasets, and it reduces the amount of labeled data required on the target dataset.
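The fine-tuning variant of transfer learning mentioned above can be sketched in miniature: a feature extractor "pretrained" on reference data is frozen, and only a new classifier head is trained on a small labeled target set. The shapes, the two-class task, and the synthetic data below are illustrative assumptions, not the paper's actual architecture or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on reference data
# (e.g. a CNN trunk); here a fixed random projection, frozen for transfer.
W_ref = rng.normal(size=(12, 8))

def features(x):
    # Frozen extractor: maps 12-dim sensor windows to 8-dim features.
    return np.tanh(x @ W_ref)

# Small labeled target set (hypothetical, standing in for a few
# annotated order-picking windows).
X_tgt = rng.normal(size=(40, 12))
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0).astype(float)

# Fine-tune only the classifier head (logistic regression) by
# gradient descent; W_ref is never updated.
w, b = np.zeros(8), 0.0
F = features(X_tgt)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # predicted probabilities
    grad = p - y_tgt                          # logistic-loss gradient
    w -= 0.1 * F.T @ grad / len(y_tgt)
    b -= 0.1 * grad.mean()

acc = ((F @ w + b > 0) == (y_tgt == 1)).mean()
print(f"target accuracy after head-only fine-tuning: {acc:.2f}")
```

Because only the small head is trained, far fewer labeled target samples are needed than for training the full model from scratch, which is the practical benefit the abstract attributes to fine-tuning.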
