Abstract
In spite of extensive research over the last decade, activity recognition still faces many challenges in real-world applications. On one hand, when attempting to recognize various activities, different sensors play different roles for different activity classes. This heterogeneity raises the need to learn the optimal combination of sensor modalities for each activity. On the other hand, users may annotate activities consistently or only occasionally. To boost recognition accuracy, we need to incorporate this user input and incrementally adjust the model. To tackle these challenges, we propose an adaptive activity recognition framework with dynamic heterogeneous sensor fusion. We dynamically fuse various modalities to characterize different activities, and the model is continually updated upon the arrival of newly labeled data. To evaluate the effectiveness of the proposed framework, we incorporate popular feature transformation algorithms, e.g., Linear Discriminant Analysis, Marginal Fisher's Analysis, and Maximum Mutual Information, into the proposed framework. Finally, we carry out experiments on a real-world dataset collected over two weeks. The results demonstrate the practical value of our framework and its advantages over existing approaches.
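To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of combining heterogeneous sensor modalities via Linear Discriminant Analysis, one of the feature transformation algorithms named above. It assumes scikit-learn and uses synthetic data; the per-modality weighting by training accuracy is a simplifying assumption standing in for the paper's learned, per-activity modality combination.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic features from two hypothetical modalities (e.g., accelerometer
# and audio) for 3 activity classes; class identity shifts the feature means.
n, n_classes = 300, 3
y = rng.integers(0, n_classes, size=n)
acc = rng.normal(y[:, None], 1.0, size=(n, 4))      # cleaner accelerometer features
aud = rng.normal(2 * y[:, None], 3.0, size=(n, 6))  # noisier audio features

# Fit one LDA per modality; use its training accuracy as a crude
# per-modality weight, then fuse the class posteriors with those weights.
models, weights = [], []
for X in (acc, aud):
    lda = LinearDiscriminantAnalysis().fit(X, y)
    models.append(lda)
    weights.append(lda.score(X, y))
weights = np.array(weights) / np.sum(weights)

# Weighted sum of per-modality class posteriors, then argmax for the label.
fused = sum(w * m.predict_proba(X)
            for w, m, X in zip(weights, models, (acc, aud)))
pred = fused.argmax(axis=1)
print("fused training accuracy:", (pred == y).mean())
```

In the paper's setting, the weights would instead be learned jointly per activity class and updated incrementally as newly labeled data arrives, rather than fixed from training accuracy as in this toy example.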