Abstract

Virtual training has received a considerable amount of research attention in recent years due to its potential for use in a variety of applications, such as virtual military training, virtual emergency evacuation, and virtual firefighting. To provide a trainee with an interactive training environment, human action recognition methods have been introduced as a major component of virtual training simulators. Wearable motion capture suit-based human action recognition has been widely used for virtual training, although it may distract the trainee. In this paper, we present a virtual training simulator based on 360° multi-view human action recognition using multiple Kinect sensors that provides an immersive environment for the trainee without the need to wear devices. To this end, the proposed simulator contains coordinate system transformation, front-view Kinect sensor tracking, multi-skeleton fusion, skeleton normalization, orientation compensation, feature extraction, and classifier modules. Virtual military training is presented as a potential application of the proposed simulator. To train and test it, a database consisting of 25 military training actions was constructed. In the test, the proposed simulator provided an excellent, natural training environment in terms of frame-by-frame classification accuracy, action-by-action classification accuracy, and observational latency.
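The abstract names a pipeline of modules (coordinate system transformation, multi-skeleton fusion, skeleton normalization) without giving the underlying math. As a rough illustration only, the per-sensor alignment, confidence-weighted fusion, and normalization steps might look like the following sketch; every function name, the rigid-transform parameters `R`/`t`, and the confidence weights are assumptions, not the paper's actual method:

```python
import numpy as np

def to_common_frame(joints, R, t):
    """Map (J, 3) joint positions from one Kinect's local frame into the
    shared world frame via an assumed rigid transform (rotation R, offset t)."""
    return joints @ R.T + t

def fuse_skeletons(skeletons, weights):
    """Fuse aligned (J, 3) skeletons from several sensors by a weighted
    average, e.g. weighting each sensor by its tracking confidence."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    return np.tensordot(w, np.stack(skeletons), axes=1)

def normalize_skeleton(joints, root=0, ref_pair=(0, 1)):
    """Make the skeleton translation- and scale-invariant: center on an
    assumed root joint and divide by an assumed reference bone length."""
    centered = joints - joints[root]
    scale = np.linalg.norm(joints[ref_pair[0]] - joints[ref_pair[1]])
    return centered / scale
```

In a multi-Kinect setup of this kind, each sensor's `R`/`t` would typically come from an offline calibration step; the fused, normalized skeleton would then feed the orientation compensation and feature extraction modules described above.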
