Abstract

Interacting with devices such as computers or smartphones through human gestures has presented several challenges. This form of interaction relies on gesture interaction technology such as Leap Motion from Leap Motion, Inc., which enables humans to use hand gestures to interact with a computer. The technology offers excellent hand detection performance and even allows simple games to be played using gestures. Another example is the contactless use of a smartphone to take a photograph simply by closing and opening the palm. Research on interaction with other devices via hand gestures is in progress. Similarly, studies on creating hologram displays from objects that actually exist are also underway. We propose a hand gesture recognition system that can control a tabletop holographic display based on an actual object. Depth images obtained using the latest time-of-flight (ToF) depth camera, Azure Kinect, are used to extract information about the hand and its joints with the deep-learning model CrossInfoNet. Using this information, we developed a real-time system that defines and recognizes gestures for basic rotation to the left, right, up, and down, as well as zoom in, zoom out, and continuous rotation to the left and right.
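
As a rough illustration of how such a recognizer could map hand motion to the rotation and zoom commands listed above, the following Python sketch classifies the frame-to-frame displacement of a palm-center position (e.g., estimated from the hand joints produced by a model such as CrossInfoNet). The gesture rules, threshold values, and function names are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

MOVE_THRESHOLD_MM = 40.0   # lateral/vertical displacement needed for a rotation gesture (assumed)
DEPTH_THRESHOLD_MM = 60.0  # displacement toward/away from the camera for a zoom gesture (assumed)

def classify_gesture(prev_palm: np.ndarray, curr_palm: np.ndarray) -> str:
    """Map the 3-D displacement of the palm center (x, y, z in mm, camera coordinates)
    to a command: lateral motion -> left/right rotation, vertical motion -> up/down
    rotation, motion along the depth axis -> zoom in/out. Simplified stand-in only."""
    dx, dy, dz = curr_palm - prev_palm
    if abs(dz) > DEPTH_THRESHOLD_MM:
        return "zoom_in" if dz < 0 else "zoom_out"        # moving toward the camera zooms in
    if abs(dx) >= abs(dy):
        if abs(dx) > MOVE_THRESHOLD_MM:
            return "rotate_right" if dx > 0 else "rotate_left"
    elif abs(dy) > MOVE_THRESHOLD_MM:
        return "rotate_up" if dy < 0 else "rotate_down"   # image y grows downward
    return "none"

# Example with palm positions from two consecutive frames:
print(classify_gesture(np.array([0.0, 0.0, 600.0]),
                       np.array([55.0, 5.0, 598.0])))     # -> rotate_right
```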

Highlights

  • Gesture interaction technology that measures and analyzes the movement of the user’s body to control information devices or to link with content has been the topic of many studies [1,2,3,4,5,6,7]

  • Several cameras for gaze tracking are attached to the tabletop holographic display; these appear in the depth information and interfere with accurate hand detection

  • We designed a gesture interaction system that uses Azure Kinect to enable the hologram displayed on the tabletop holographic display to be controlled in real time without any additional equipment


Introduction

Gesture interaction technology that measures and analyzes the movement of the user’s body to control information devices or to link with content has been the topic of many studies [1,2,3,4,5,6,7]. Among the parts of the body, the hand is the most widely used, as its high degree of freedom makes it capable of a wide variety of formations. Azure Kinect is Microsoft’s ToF-based depth camera, released in 2019. When the camera is first turned on, the first received frame contains only the background and structures other than the user. Pixels whose depth differs from this reference frame by more than a predetermined threshold are kept, and the background and surrounding structures are erased, leaving an image that contains only the depth information of the user and the user’s hand. In a tabletop holographic display setup using Azure Kinect, the bottom part of the tabletop is closer to the camera than the hand. In addition, several cameras for gaze tracking are attached to the tabletop holographic display; these appear in the depth information and interfere with accurate hand detection. The depth information of the background and these structures is therefore erased using background subtraction [28].
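
As an illustration of this background-subtraction step, the following Python sketch keeps only the pixels whose depth differs from the first, user-free reference frame by more than a threshold. It assumes the depth frames are already available as NumPy arrays (e.g., read from the Azure Kinect depth stream); the threshold value and helper names are illustrative and not taken from the paper.

```python
import numpy as np

DEPTH_DIFF_THRESHOLD_MM = 100  # assumed threshold; the paper's exact value is not stated here

def build_reference(first_frame: np.ndarray) -> np.ndarray:
    """Keep the first received frame, which contains only the background and structures."""
    return first_frame.astype(np.int32)

def subtract_background(depth_frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Zero out background/structure pixels, keeping only regions (the user and hand)
    whose depth differs from the reference by more than the threshold."""
    diff = np.abs(depth_frame.astype(np.int32) - reference)
    return np.where(diff > DEPTH_DIFF_THRESHOLD_MM, depth_frame, 0)

# Usage sketch (frames would come from the Azure Kinect depth stream):
# reference = build_reference(first_depth_frame)
# hand_only = subtract_background(current_depth_frame, reference)
```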
