Abstract

In this paper, a real-time behavior recognition demo system is proposed. By utilizing the skeletons and depth information captured by multiple Kinect cameras mounted at different locations with different viewpoints, the occluded parts of a player and the ball information in the depth channels can be compensated for by another Kinect camera that observes the scene without occlusion. In addition, a machine learning model trained on the skeleton and depth-channel information from two Kinect cameras achieves a recognition rate of more than 80% in real-time use on three trained behaviors, i.e., right-hand dribble, left-hand dribble, and shooting.

