Abstract

Sensor-based human activity recognition (HAR) aims to recognise users' current activities from a collection of sensor data in real time. This ability opens unprecedented opportunities for many applications, and ambient assisted living (AAL) for elderly care is one of the most exciting examples. For instance, from meal-preparation activities we can derive a user's dietary routine and detect any anomaly or decline in their physical or cognitive condition, prompting an immediate, appropriate change to their care plan. With a rapidly ageing population and an overstretched healthcare system, the need for AAL technology is growing quickly. However, the complexity of real-world deployment poses significant challenges to current sensor-based HAR, including the inherently imperfect nature of sensing technologies, constant change in activity routines, and the unpredictability of situations or events occurring in an environment. Such complexity can reduce recognition accuracy over time and, in turn, degrade the performance of an AAL system. The state-of-the-art methodology for studying HAR grew out of short-term lab or testbed experimentation: it relies on well-annotated sensor data and assumes no change in activity models, which is no longer suitable for long-term, large-scale, real-world deployment. This creates a need for an activity recognition system that embeds the means of automatic adaptation to change, i.e., lifelong learning. This talk will discuss new challenges and opportunities in lifelong learning for human activity recognition, with a particular focus on transfer learning of activity labels across heterogeneous datasets.

Highlights

  • Class evolution: most current HAR approaches follow a well-established methodology [14], starting with deployment: pre-define a closed set of activities of interest and select a range of ambient and/or wearable sensors that can potentially detect them

  • Asking users “what are you doing at the moment?” might not guarantee a relevant answer; this differs from annotation at the training phase, where we have a set of predefined labels and ask the user to select whichever apply

  • The difficulty is that we may not be able to foresee what new activities users will perform, so we cannot provide them with a predefined label set
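One pragmatic way to surface candidate new activities is open-set rejection: flag a sensor window as "unknown" when it is not close enough to any known activity class, and queue it for user annotation. The sketch below uses a nearest-centroid rule on synthetic data; the class names, feature layout, and rejection threshold are illustrative assumptions, not the talk's own method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic feature windows for two known activities (illustrative data).
classes = {"sleeping": 0.0, "walking": 4.0}
centroids = {}
for name, centre in classes.items():
    X = rng.normal(centre, 0.4, size=(60, 3))  # 60 windows, 3 features each
    centroids[name] = X.mean(axis=0)

REJECT_DISTANCE = 1.5  # assumed cut-off; would need tuning per deployment

def recognise(window):
    """Nearest-centroid match, or None when no known activity is close enough."""
    name, dist = min(
        ((n, np.linalg.norm(window - c)) for n, c in centroids.items()),
        key=lambda item: item[1],
    )
    # A distant window is flagged as a possible new activity class,
    # to be queued for user annotation rather than forced into a known label.
    return name if dist <= REJECT_DISTANCE else None

known = recognise(np.full(3, 4.1))   # close to the "walking" centroid
novel = recognise(np.full(3, 10.0))  # far from every known class
```

Here `known` resolves to `"walking"` while `novel` is `None`, i.e., a candidate new class. The rejected windows are exactly the ones for which the system cannot offer the user a predefined label.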


Summary

Class Evolution

Most of the current HAR approaches follow a well-established methodology [14]:

  • Deployment: pre-define a closed set of activities of interest, and select a range of ambient and/or wearable sensors that can potentially detect the activities.
  • Model training: collect sensor data for a short period of time, annotate them with activity labels, and build a computational model that correlates sensor data with activities, either by defining expert knowledge or by training a machine learning technique.
  • Activity recognition: recognise current activities from real-time sensor data.
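As a concrete illustration, this three-step methodology can be sketched on synthetic data; the activity set, feature layout, and choice of a random-forest classifier here are illustrative assumptions, not the talk's own pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 1. Deployment: a pre-defined, closed set of activities of interest.
ACTIVITIES = ["sleeping", "cooking", "walking"]

# 2. Model training: short-term, annotated sensor data.  Each row stands in
#    for summary features extracted from one window of sensor readings.
def make_windows(n_per_class):
    X, y = [], []
    for label, centre in enumerate([0.0, 3.0, 6.0]):
        X.append(rng.normal(centre, 0.5, size=(n_per_class, 4)))
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

X_train, y_train = make_windows(50)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 3. Activity recognition: classify an incoming real-time window.
new_window = rng.normal(3.0, 0.5, size=(1, 4))
predicted = ACTIVITIES[clf.predict(new_window)[0]]
```

The closed-set assumption is visible in the last line: whatever the window contains, the model can only ever answer with one of the three pre-defined labels, which is exactly what breaks down under class evolution.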

Environments can change with new layouts and cohabitants. The summary then covers:

  • Concept Drift
  • General Reference Framework: Change Detection, Change Annotation, and Change Adaptation
  • Distinguish Meaningful Change from Noise
  • Informed Change Discovery
  • Opportune Moment for User Annotation
  • Quality of Annotations
  • Multiple Sources for Annotations
  • Ensemble Learning for Evolving and Emerging Activity Classes
  • Conclusion