Abstract
This paper presents a simple and robust framework based on a Neuro-Fuzzy System (NFS) for identifying human arm gestures from skeletal data captured by a Kinect sensor. The proposed framework consists of three phases. The first is the data collection phase, in which the Kinect sensor captures joint positions in 3D space. These 3D joint positions are transformed into an angular representation, which reduces the number of dimensions and limits the distribution of data points for each gesture, making the data easier to cluster. The second is the training phase, in which the NFS is trained on the transformed joint data. Subtractive clustering is used as an optimization tool to determine the optimal number of fuzzy membership functions; this reduces the search space for the training neural network and thus speeds up training. The third is the recognition phase, in which the framework classifies a given arm gesture as one of the trained gestures in real time. The framework is robust and can be extended to full-body human gesture recognition with minimal changes, and it can be used in various Human-Computer Interaction (HCI) and Human-Robot Interaction (HRI) applications.
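As a minimal sketch of the angle-based transformation the abstract describes, the snippet below computes the angle at a joint from three 3D joint positions; the function name and sample coordinates are hypothetical illustrations, not the paper's actual preprocessing code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b, formed by segments b->a and b->c.

    For example, the elbow angle from shoulder (a), elbow (b), wrist (c).
    Inputs are 3-element (x, y, z) positions, e.g. from Kinect skeletal data.
    """
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

# Hypothetical joint positions (metres) for a single frame.
shoulder = [0.0, 1.4, 2.0]
elbow    = [0.3, 1.1, 2.0]
wrist    = [0.6, 1.4, 2.0]
print(np.degrees(joint_angle(shoulder, elbow, wrist)))  # prints 90.0
```

Replacing each arm's raw 9 coordinates (three joints x 3D) with one or two such angles illustrates how the angular representation shrinks the feature space before clustering and training.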