Abstract

Recently, computer vision technologies have achieved excellent performance on complex tasks such as behavior recognition. Consequently, several studies have proposed datasets for behavior recognition, including sign language recognition. In many countries, researchers are working to automatically recognize and interpret sign language to facilitate communication with deaf people. However, no dataset yet targets the sign language used in Korea, and research in this area remains insufficient. Since sign language varies from country to country, building a dataset for Korean sign language is valuable. This paper therefore proposes a dataset of videos of isolated signs from Korean sign language that can also be used for behavior recognition with deep learning. We present the Korean Sign Language (KSL) dataset, which consists of video clips of 77 Korean sign language words performed by 20 deaf signers. We train and evaluate deep learning networks that have recently achieved excellent performance on the behavior recognition task using this dataset. We also confirm, through a deconvolution-based visualization method, that the deep learning networks capture the characteristics of the dataset.
