Abstract

This paper describes a method to assist the annotation of sign language words using binary action segmentation. Binary action segmentation divides a sign video into binary units corresponding to signing motion and static posture. With this segmentation, the user's annotation task is reduced from fully manual work to inputting labels and correcting the segmented units. The proposed binary action segmentation combines a Support Vector Machine with Graph cuts: the trained Support Vector Machine classifies each frame as Motion or Pause, and Graph cuts refine the initial segmentation. We evaluated the proposed method on a Japanese sign language word database containing 92 words signed by ten native signers. Of the 4,590 recorded videos, 3,800 videos of 76 words were used for the evaluation, excluding videos with recording and signing errors. The proposed method achieves results comparable to the previous method with a smaller amount of training data. Moreover, the work reduction ratios of annotation tasks using an annotation interface were 26.17%, 26.34%, and 17.88% for the sets whose numbers of segmented units were 2, 3, and 4, respectively.
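The two-stage pipeline in the abstract (per-frame Motion/Pause classification followed by graph-cut refinement) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the per-frame SVM has already produced a signed margin score for each frame (positive suggesting Motion), and it refines the labels by exactly minimizing a unary-plus-Potts energy on the 1-D frame chain via dynamic programming, which for a chain is equivalent to a binary graph cut. The function name `refine_labels` and the `smooth` penalty are illustrative choices.

```python
import numpy as np

def refine_labels(scores, smooth=2.0):
    """Refine per-frame Motion/Pause decisions with a Potts smoothness prior.

    scores : per-frame classifier margins (e.g. SVM decision values),
             where score > 0 suggests Motion and score < 0 suggests Pause.
    smooth : penalty paid for each Motion/Pause boundary; larger values
             suppress short spurious units.

    Labels are 0 = Pause, 1 = Motion.  On a 1-D chain this energy is
    minimized exactly by Viterbi-style dynamic programming, equivalent
    to a binary graph cut over the frame sequence.
    """
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    # Unary costs: assigning Pause is cheap when the score is negative,
    # assigning Motion is cheap when the score is positive.
    unary = np.stack([np.maximum(scores, 0.0),    # cost of Pause
                      np.maximum(-scores, 0.0)],  # cost of Motion
                     axis=1)
    cost = np.zeros((n, 2))
    back = np.zeros((n, 2), dtype=int)
    cost[0] = unary[0]
    for t in range(1, n):
        for k in range(2):
            stay = cost[t - 1, k]                 # keep the same label
            switch = cost[t - 1, 1 - k] + smooth  # pay for a boundary
            if stay <= switch:
                cost[t, k] = stay + unary[t, k]
                back[t, k] = k
            else:
                cost[t, k] = switch + unary[t, k]
                back[t, k] = 1 - k
    # Backtrack the optimal labeling.
    labels = np.zeros(n, dtype=int)
    labels[-1] = int(np.argmin(cost[-1]))
    for t in range(n - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels
```

For example, a single weak positive score inside a run of negatives (an isolated one-frame "Motion" detection) is smoothed away when its unary gain is smaller than the two boundary penalties it would introduce, so the output contains fewer, longer units for the annotator to label and correct.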
