Abstract

The Sussex-Huawei Locomotion-Transportation Recognition Challenge presented a unique opportunity for the activity-recognition community to test its approaches on a large, real-life benchmark dataset with activities different from those typically recognized. The goal of the challenge was to recognize, as accurately as possible, eight locomotion activities (Still, Walk, Run, Bike, Car, Bus, Train, Subway) using smartphone sensor data. This paper describes the method we developed to win the challenge and provides an analysis of the effectiveness of its components. We used complex feature extraction and selection methods to train classical machine learning models. In addition, we trained deep learning models using a novel end-to-end architecture for deep multimodal spectro-temporal fusion. All the models were fused into an ensemble, with the final predictions smoothed by a hidden Markov model to account for the temporal dependencies of the activities. The presented method achieved an F1 score of 94.9% on the challenge test data. We evaluated different sampling frequencies, window sizes, feature types and classification models, as well as the importance of stand-alone sensors and their fusion for the task. Finally, we present an energy-efficient smartphone implementation of the method.
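The HMM smoothing step mentioned above can be illustrated with a short sketch. The abstract does not specify the exact implementation, so the following is only a plausible illustration, assuming per-window class probabilities from the ensemble are decoded with the Viterbi algorithm under a "sticky" transition matrix (activities rarely change between adjacent windows); the function name, transition values and class count are illustrative, not taken from the paper.

```python
import numpy as np

def viterbi_smooth(probs, trans, prior):
    """Smooth per-window classifier outputs with an HMM via Viterbi decoding.

    probs: (T, K) array of per-window class probabilities (used as emissions)
    trans: (K, K) transition matrix, trans[i, j] = P(next=j | current=i)
    prior: (K,) initial activity distribution
    Returns the most likely activity index for each of the T windows.
    """
    T, K = probs.shape
    eps = 1e-12  # avoid log(0)
    log_p = np.log(probs + eps)
    log_t = np.log(trans + eps)
    delta = np.log(prior + eps) + log_p[0]   # best log-score ending in each state
    back = np.zeros((T, K), dtype=int)       # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_t      # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_p[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):           # backtrack the optimal sequence
        path[t] = back[t + 1, path[t + 1]]
    return path

# Illustrative use with 3 classes: a single low-confidence flicker at t=2
# is overridden because switching activities twice is too costly.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.90, 0.05, 0.05],
                  [0.45, 0.55, 0.00],
                  [0.90, 0.05, 0.05],
                  [0.90, 0.05, 0.05]])
trans = np.full((3, 3), 0.01) + np.eye(3) * 0.97  # sticky: 0.98 self-transition
prior = np.full(3, 1.0 / 3.0)
smoothed = viterbi_smooth(probs, trans, prior)
```

With a sticky transition matrix, isolated misclassifications in an otherwise stable activity segment are smoothed away, which is the stated purpose of the HMM stage.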
