Abstract

Sensor-based human activity recognition (HAR) is a common application in the fields of mobile computing and pattern recognition. Existing HAR approaches and models achieve ideal recognition performance only in well-designed, specific, and deterministic scenarios. In real scenes, however, new situations arise, such as new types of activities and new human subjects performing them, and it is difficult to collect sufficient high-quality data for these situations in time. Existing approaches and models therefore suffer from a lack of interoperability and scalability. To address these challenges, this study proposes a mixed-task strategy that combines Model-Agnostic Meta-Learning (MAML) with a Coarse-Fine Convolutional Neural Network (CFCNN) to achieve fast adaptation to human activity recognition in new situations. This novel method incorporates few-shot learning to recognize tasks when several kinds of new scenarios exist. First, traditional MAML was modified for multi-scale feature extraction, and a mixed-task strategy was adopted during training. The proposed method improves the generalization ability of the model and enables fast learning on different new tasks. Finally, the model was compared with other models in two new scenarios: a new activity category and a new human subject. The results show that the accuracy of the proposed MAML-CFCNN with the mixed-task strategy is higher than that of state-of-the-art methods, including MAML, First-Order MAML (FOMAML), and Reptile.
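The core mechanism described here is MAML's two-level optimization: an inner loop adapts a multi-scale (coarse-fine) CNN to each task's small support set, and an outer loop updates the shared initialization so that one or a few gradient steps suffice on a new task. The sketch below illustrates this in PyTorch; the layer sizes, sensor-window shapes, task sampler, and the way "mixed" tasks are batched are illustrative assumptions, not the paper's exact CFCNN architecture or training schedule.

    # Minimal sketch of a MAML-style update for a coarse-fine CNN on sensor
    # windows. All shapes, layer sizes, and the task sampler are assumptions
    # for illustration, not the paper's exact design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.func import functional_call

    class CoarseFineCNN(nn.Module):
        """Two parallel 1-D conv branches with different kernel sizes
        (a coarse/large-receptive-field branch and a fine/small one),
        concatenated before classification -- one common way to extract
        multi-scale features from inertial sensor windows."""
        def __init__(self, in_channels=3, n_classes=5):
            super().__init__()
            self.fine = nn.Conv1d(in_channels, 16, kernel_size=3, padding=1)
            self.coarse = nn.Conv1d(in_channels, 16, kernel_size=15, padding=7)
            self.head = nn.Linear(32, n_classes)

        def forward(self, x):                      # x: (batch, channels, time)
            f = F.relu(self.fine(x)).mean(dim=-1)  # global average pool per branch
            c = F.relu(self.coarse(x)).mean(dim=-1)
            return self.head(torch.cat([f, c], dim=-1))

    def maml_step(model, tasks, meta_opt, inner_lr=0.01):
        """One meta-update over a batch of (support, query) tasks.
        functional_call evaluates the model with 'fast' adapted weights so
        the inner-loop gradient flows into the outer (meta) gradient."""
        params = dict(model.named_parameters())
        meta_loss = 0.0
        for xs, ys, xq, yq in tasks:
            # Inner loop: one adaptation step on the task's support set.
            support_loss = F.cross_entropy(functional_call(model, params, (xs,)), ys)
            grads = torch.autograd.grad(support_loss, list(params.values()),
                                        create_graph=True)
            fast = {k: p - inner_lr * g
                    for (k, p), g in zip(params.items(), grads)}
            # Outer loop: evaluate the adapted weights on the query set.
            meta_loss = meta_loss + F.cross_entropy(
                functional_call(model, fast, (xq,)), yq)
        meta_opt.zero_grad()
        meta_loss.backward()
        meta_opt.step()
        return meta_loss.item()

    if __name__ == "__main__":
        torch.manual_seed(0)
        model = CoarseFineCNN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        # Hypothetical "mixed" meta-batch: tasks drawn from different kinds of
        # new scenarios (e.g. new activity classes, new subjects) together.
        def fake_task():
            xs, xq = torch.randn(10, 3, 128), torch.randn(10, 3, 128)
            ys, yq = torch.randint(0, 5, (10,)), torch.randint(0, 5, (10,))
            return xs, ys, xq, yq
        print(maml_step(model, [fake_task() for _ in range(4)], opt))

Because create_graph=True keeps the inner-loop gradients in the computation graph, this is the second-order MAML update; detaching those gradients instead would recover the cheaper first-order approximation (FOMAML) that the abstract lists among the compared baselines.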
