Abstract
In the field of computer vision, depth image sequences collected by a depth camera are insensitive to interference from lighting, occlusion, and background clutter. In recent years they have therefore often been used to collect behavior data, from which skeleton joint point features are extracted as behavior information. However, behavior recognition that directly uses the joint coordinates collected by the depth camera is easily affected by individual differences in how a behavior is performed and by changes in shooting distance. Considering the position information of human joints together with the limb angle and length information implicit in the skeleton data, this paper proposes a behavior feature description method based on the vector angles and vector modulus ratios of the human body structure. The method effectively resolves the above problems and achieves satisfactory results on a self-built data set, making it suitable for recognizing simple daily behaviors.
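The exact formulation is given in the full paper; as a hedged sketch of the underlying idea, for three adjacent joints p_i, p_j, p_k (with p_j the articulation point; the indices here are illustrative), the two feature types can be written as:

\theta_{ijk} = \arccos\!\left( \frac{(p_i - p_j)\cdot(p_k - p_j)}{\lVert p_i - p_j \rVert \,\lVert p_k - p_j \rVert} \right), \qquad r_{ij,jk} = \frac{\lVert p_i - p_j \rVert}{\lVert p_j - p_k \rVert}

Both quantities depend only on differences of joint coordinates and on length ratios, so translation of the subject and the uniform scale change caused by a different shooting distance cancel out.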
Highlights
In modern society, the pace of life is fast, work pressure is high, and population aging is a serious problem
To address the problem of daily monitoring of people living alone in small-scale indoor scenes, this paper proposes a behavior feature description method based on the vector modulus ratios and vector angles of the human body structure
Its main contribution is the Actionlet Ensemble (AE) model, which can handle errors in skeleton tracking and better characterize intra-class variation
Summary
In modern society, the pace of life is fast, work pressure is high, and population aging is a serious problem. In previous research on behavior recognition using joint information, the three-dimensional coordinate sequences of skeleton joint points are usually used to construct feature vectors directly. That approach emphasizes only the position information of the joint points and ignores the limb lengths and the angles between limbs that characterize human physiological structure during activity. The proposed method does not take absolute position information as a behavior feature, so changes in the subject's position relative to the Kinect no longer affect the recognition result, which solves problem (1) mentioned above. The method can be used for behavior recognition in the preset application scenarios
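As an illustration (not the authors' exact implementation), the following is a minimal Python sketch of such position- and distance-invariant skeleton features; the joint triples, limb pairs, and array layout are hypothetical placeholders, not the paper's actual skeleton definition:

import numpy as np

# Hypothetical joint triples (end joint, articulation joint, end joint);
# the indices are placeholders, not the paper's skeleton layout.
ANGLE_TRIPLES = [(0, 1, 2), (2, 3, 4)]
# Hypothetical limb pairs whose length ratio serves as a feature.
RATIO_PAIRS = [((0, 1), (1, 2)), ((2, 3), (3, 4))]

def vector_angle(a, b):
    """Angle between two 3-D vectors, in radians."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def frame_features(joints):
    """joints: (num_joints, 3) array of joint coordinates for one frame.

    Returns angle and length-ratio features that depend only on
    coordinate differences and ratios, so the subject's absolute
    position and the camera distance drop out.
    """
    feats = []
    for i, j, k in ANGLE_TRIPLES:
        feats.append(vector_angle(joints[i] - joints[j],
                                  joints[k] - joints[j]))
    for (a, b), (c, d) in RATIO_PAIRS:
        feats.append(np.linalg.norm(joints[a] - joints[b]) /
                     (np.linalg.norm(joints[c] - joints[d]) + 1e-8))
    return np.array(feats)

For a behavior sequence, such per-frame feature vectors would typically be concatenated or pooled over time before being passed to a classifier.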