Abstract

Human activity recognition (AR) has begun to mature as a field, but for AR research to thrive, large, diverse, high-quality AR data sets must be publicly available and AR methodology must be clearly documented and standardized. In the process of comparing our AR research to other efforts, however, we found that most AR data sets are limited enough to undermine the reliability of existing research results, and that many AR research papers do not clearly document their experimental methodology and often make unrealistic assumptions. In this paper we outline the problems and limitations of existing AR data sets and describe the methodological problems we observed, in the hope that this will lead to the creation of improved and better documented data sets and improved AR experimental methodology. Although we cover a broad array of methodological issues, our primary focus is on an often overlooked factor, model type, which determines how AR training and test data are partitioned and how AR models are evaluated. Our prior research indicates that personal, hybrid, and impersonal/universal models yield dramatically different performance [30], yet many research studies do not highlight or even identify this factor. We make concrete recommendations to address these issues and also describe our own publicly available AR data sets.
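To make the model-type distinction concrete, the following minimal sketch (our own illustration, not taken from the paper; all function and variable names are hypothetical) shows how personal, hybrid, and impersonal/universal evaluations partition AR data by subject:

```python
# Illustrative sketch of the three model types' train/test partitions.
# Assumes a feature matrix X, label vector y, and a parallel NumPy
# array of subject IDs; none of these names come from the paper.
import numpy as np

def impersonal_split(X, y, subjects, test_subject):
    """Impersonal/universal model: train on every subject EXCEPT the
    test subject, then evaluate on the held-out subject's data."""
    test_mask = subjects == test_subject
    return (X[~test_mask], y[~test_mask]), (X[test_mask], y[test_mask])

def personal_split(X, y, subjects, test_subject, train_frac=0.5, rng=None):
    """Personal model: train and test only on the target subject's
    own labeled data, split randomly."""
    rng = rng or np.random.default_rng(0)
    idx = np.flatnonzero(subjects == test_subject)
    rng.shuffle(idx)
    cut = int(len(idx) * train_frac)
    tr, te = idx[:cut], idx[cut:]
    return (X[tr], y[tr]), (X[te], y[te])

def hybrid_split(X, y, subjects, test_subject, train_frac=0.5, rng=None):
    """Hybrid model: train on the other subjects' data PLUS some of the
    test subject's labeled data; test on that subject's remaining data."""
    (Xp, yp), (Xte, yte) = personal_split(X, y, subjects, test_subject,
                                          train_frac, rng)
    other = subjects != test_subject
    Xtr = np.vstack([X[other], Xp])
    ytr = np.concatenate([y[other], yp])
    return (Xtr, ytr), (Xte, yte)
```

Because the impersonal split never shows the classifier any data from the test subject, while the personal and hybrid splits do, the three evaluations can produce very different accuracy figures on the same data set, which is why the paper argues this factor must be reported.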
