Abstract

Recently, researchers have placed great emphasis on modeling activity patterns to better understand human behavior. Several approaches have been explored to develop automatic human activity recognition systems that infer detailed semantics from visual images, aiming to understand real human behavior patterns. However, there is still a need for a cost-effective solution to distinguish human actions in real-world environments. Motivated by this, a novel approach is proposed to recognize shoplifting acts by examining postural evidence of the human body. The approach begins by extracting a two-dimensional pose, representing the body joints as a skeleton, from the recorded frames. A preprocessing step then cleans the skeleton data, handling occlusion as well. Postural feature generation next extracts pertinent features from the preprocessed skeletons. Finally, feature reduction downsizes the derived features to a smaller dimension, and activity classification is performed on the reduced features to identify shoplifting behaviors in real time. Experiments are conducted on a synthetic shoplifting dataset and on videos recorded in real stores; the results appear more promising than those of other state-of-the-art methods, with accuracies of 97.36% and 91.66% on the synthesized and real-store inputs, respectively.
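The reduce-then-classify stages described above can be sketched minimally as follows. This is an illustrative assumption, not the authors' implementation: it uses synthetic stand-in features (the paper's pose estimator, preprocessing, and feature generation are not specified here), scikit-learn's PCA for feature reduction, and an SVM for classification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for postural features: 200 frames, 34 values each
# (e.g., 17 body joints x (x, y) coordinates from a 2D pose estimator).
X = rng.normal(size=(200, 34))
y = rng.integers(0, 2, size=200)  # hypothetical labels: 0 = normal, 1 = suspicious

# Feature reduction (PCA to 10 dimensions) followed by classification (SVC),
# mirroring the pipeline's final two stages.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
model.fit(X, y)
preds = model.predict(X)  # one predicted label per frame
```

In a real deployment, `X` would instead hold the preprocessed skeleton features per frame, and the classifier would be trained on labeled shoplifting/non-shoplifting sequences.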
