Abstract
Most approaches to recognizing human activities rely on pattern recognition techniques that are trained once at design time and then remain unchanged during use. This reflects the assumption that the mapping between sensor signal patterns and activity classes is known at design time. This cannot be guaranteed in mobile and pervasive computing, where unpredictable changes often occur in open-ended environments. Run-time adaptation can address these issues. We introduce and formalize a data processing architecture that extends current approaches and allows for a wide range of realizations of adaptive activity recognition systems. The adaptive activity recognition chain (adARC) includes self-monitoring, adaptation strategies, and external feedback as components of the now closed-loop recognition system. We show an adARC capable of unsupervised self-adaptation to class distributions that change at run time; it improves activity recognition accuracy when sensors suffer from on-body displacement. We show an adARC capable of adaptation to changing sensor setups; it allows for scalability by enabling a recognition system to autonomously exploit newly introduced sensors. We discuss other adaptive recognition systems within the adARC architecture. The results indicate that this architecture frames a useful solution space for the real-world deployment of adaptive activity recognition systems, and that it makes it possible to present and compare recognition systems in a coherent and modular manner. We discuss the challenges and new research directions resulting from this new perspective on adaptive activity recognition.
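To make the closed-loop structure concrete, the following is a minimal, hypothetical Python sketch of how the adARC components named above (classification, self-monitoring, an adaptation strategy, and optional external feedback) could be wired together. All class and method names, and the confidence-based change detector, are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of an adARC closed loop; names such as SelfMonitor and
# AdaptiveARC, and the confidence-based change detector, are illustrative
# assumptions rather than the authors' implementation.
from dataclasses import dataclass, field


@dataclass
class Prediction:
    label: str
    confidence: float


@dataclass
class SelfMonitor:
    """Tracks classifier confidence to flag run-time changes,
    e.g. on-body sensor displacement or a shifted class distribution."""
    threshold: float = 0.6
    window: int = 50
    history: list = field(default_factory=list)

    def change_suspected(self, pred: Prediction) -> bool:
        self.history.append(pred.confidence)
        recent = self.history[-self.window:]
        # Flag a potential change when mean confidence over the window drops.
        return len(recent) == self.window and sum(recent) / len(recent) < self.threshold


class AdaptiveARC:
    """Closed-loop recognition chain: classify, self-monitor, adapt."""

    def __init__(self, classifier, adaptation_strategy):
        self.classifier = classifier           # assumed to return (label, confidence)
        self.strategy = adaptation_strategy    # e.g. unsupervised retraining on recent data
        self.monitor = SelfMonitor()
        self.buffer = []                       # recent data kept for adaptation

    def step(self, features, external_feedback=None) -> Prediction:
        label, confidence = self.classifier.classify(features)
        pred = Prediction(label, confidence)
        self.buffer.append((features, pred))

        # Self-monitoring and optional external feedback close the loop: when a
        # change is suspected, the adaptation strategy updates the classifier.
        if self.monitor.change_suspected(pred) or external_feedback is not None:
            self.strategy.adapt(self.classifier, self.buffer, external_feedback)
            self.buffer.clear()
        return pred
```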