Abstract

This paper considers an input method for the wearable computing environment using a wearable video camera. An input system is proposed in which the user writes an alphanumeric character in the air, and that motion serves as input to the computer. In the proposed method, the user's hand motion is captured by a wearable video camera, and letter input is recognized by analyzing the monochrome gray-level image on the computer. When a letter is written in the air, it is difficult to identify the start and end of the user's letter input, or the start and end points of the segments composing the letter. Furthermore, in a wearable computing environment it is desirable that the hand motion be detectable against a background that changes continually, both in daytime and at night. Consequently, this paper proposes the following three procedures: (1) extraction of the user's hand region in the picture frame under visible or infrared illumination; (2) determination of the center of gravity of the user's hand motion in the air, using the intensity difference between picture frames as a cue; (3) continuous DP matching to identify the letter. To address the inherent difficulty of recognizing letters written in the air, a writing format for alphanumeric characters is also proposed. The proposed system is implemented on a video camera and a laptop PC, and an experiment on writing letters in the air is performed with five subjects and 360 alphanumeric characters, yielding a recognition rate of approximately 75%. © 2006 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 89(5): 53–64, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20239
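Steps (2) and (3) of the abstract can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the difference threshold, the use of a 1-D feature stream (the paper operates on 2-D hand trajectories), and the names `motion_centroid` and `continuous_dp` are all assumptions introduced here for clarity.

```python
import numpy as np

def motion_centroid(prev_frame, frame, threshold=25):
    """Center of gravity of moving pixels, via inter-frame intensity
    difference (step 2 of the abstract, simplified).

    Pixels whose gray-level change exceeds `threshold` (an assumed
    value) are treated as hand motion; returns (x, y) or None if no
    motion is detected.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

def continuous_dp(stream, template):
    """Continuous DP matching (step 3, simplified to 1-D features).

    Unlike ordinary DTW, the accumulated cost at the template's first
    position is reset to 0 at every input frame, so a match may start
    at any time -- this is what handles the unknown start point of a
    letter written in the air. The cost at the template's last
    position is recorded per frame; local minima mark candidate
    letter end points.
    """
    m = len(template)
    INF = float("inf")
    prev = [0.0] + [INF] * m       # column of accumulated costs
    end_costs = []
    for x in stream:
        cur = [0.0] + [INF] * m    # cost 0: a match may start here
        for j in range(1, m + 1):
            local = abs(x - template[j - 1])
            cur[j] = local + min(prev[j], prev[j - 1], cur[j - 1])
        end_costs.append(cur[m])
        prev = cur
    return end_costs
```

For example, matching the template `[1, 2, 3]` against the stream `[5, 1, 2, 3, 9]` yields an accumulated end cost of 0 at the fourth frame, spotting the embedded pattern without knowing its start or end in advance.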
