Abstract
The topic of human activity modeling and recognition still presents many challenges, despite receiving considerable attention. These challenges include the large number of sensors often required for accurate activity recognition and the need for user-specific training samples. In this paper, an approach is presented for recognizing activities of daily living (ADL) using only a single camera and microphone as sensors. Scene analysis techniques are used to classify audio and video events, which in turn are used to model a set of activities with hidden Markov models. Data were obtained through recordings of 8 participants. The events generated by scene analysis algorithms are compared to events obtained through manual annotation. In addition, several model parameter estimation techniques are compared. In a number of experiments, it is shown that if activities are fully observed, these models yield a class accuracy of 97% on annotated data and 94% on scene analysis data. Using a sliding window approach to classify activities in progress yields a class accuracy of 79% on annotated data and 73% on scene analysis data. It is also shown that a multi-modal approach yields superior results compared to either individual modality on scene analysis data. Finally, it can be concluded that the models perform well even across participants.
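The classification scheme described above — scoring a window of observed audio/video events against one hidden Markov model per activity and picking the most likely one — can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: the activity names, event symbols, and model parameters below are hypothetical placeholders.

```python
import math


def _logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))


def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space.

    start[i]   : initial probability of state i
    trans[i][j]: probability of moving from state i to state j
    emit[i][o] : probability of state i emitting event symbol o
    """
    n = len(start)
    alpha = [math.log(start[i]) + math.log(emit[i][obs[0]]) for i in range(n)]
    for o in obs[1:]:
        alpha = [
            math.log(emit[j][o])
            + _logsumexp([alpha[i] + math.log(trans[i][j]) for i in range(n)])
            for j in range(n)
        ]
    return _logsumexp(alpha)


def classify_window(window, models):
    """Assign a window of event symbols to the activity whose HMM
    gives it the highest likelihood (the sliding-window step)."""
    return max(models, key=lambda name: forward_loglik(window, *models[name]))


# Hypothetical toy models: 2 hidden states, 3 event symbols
# (0 = water running, 1 = cutlery sounds, 2 = silence).
models = {
    "cooking": (
        [0.5, 0.5],
        [[0.7, 0.3], [0.3, 0.7]],
        [[0.6, 0.3, 0.1], [0.3, 0.6, 0.1]],
    ),
    "watching_tv": (
        [0.5, 0.5],
        [[0.9, 0.1], [0.1, 0.9]],
        [[0.1, 0.1, 0.8], [0.2, 0.2, 0.6]],
    ),
}

print(classify_window([0, 1, 0, 1], models))   # mostly kitchen events
print(classify_window([2, 2, 2, 2], models))   # mostly silence
```

In a full system, one such scoring pass would run for every position of the sliding window over the event stream, and the per-activity HMM parameters would be estimated from training data rather than fixed by hand.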