Abstract

This paper introduces an interactive display system guided by a human observer's gestures, head pose, and facial expression. The Kinect depth sensor is used to detect and track the observer's skeletal joints, while the RGB camera is used for detailed facial analysis. The display consists of active regions that the observer can manipulate with body gestures and secluded regions that are activated through head pose and facial expression. The observer receives real-time feedback, allowing intuitive navigation of the interface. A storefront interactive display was built, and feedback was collected from more than one hundred subjects. The promising results demonstrate the potential of the proposed approach for human-computer interaction applications.
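To make the described interaction model concrete, the minimal sketch below shows one way the per-frame dispatch could be structured: body gestures drive active regions, while head pose plus a facial expression trigger secluded regions. All names here (ObserverState, Region, update_display, the "smile" trigger) are hypothetical illustrations, not the authors' implementation; a real system would populate ObserverState from the Kinect skeleton stream and an RGB facial-analysis module.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ObserverState:
    # Hypothetical per-frame measurements; in practice these would come
    # from Kinect skeletal tracking and an RGB facial-expression classifier.
    hand_xy: Tuple[float, float]      # tracked hand position in display coordinates
    gesture: Optional[str]            # e.g. "swipe" or None
    head_target: Optional[str]        # region the head pose is oriented toward
    expression: Optional[str]         # e.g. "smile", "neutral"

@dataclass
class Region:
    name: str
    bounds: Tuple[float, float, float, float]  # (x0, y0, x1, y1)
    secluded: bool  # secluded regions respond to head pose + expression

    def contains(self, xy: Tuple[float, float]) -> bool:
        x, y = xy
        x0, y0, x1, y1 = self.bounds
        return x0 <= x <= x1 and y0 <= y <= y1

def update_display(regions: List[Region], state: ObserverState) -> List[str]:
    """Dispatch one frame of observer input to the display regions."""
    events = []
    for region in regions:
        if region.secluded:
            # Secluded regions activate only when the observer looks at
            # them and shows the triggering expression (assumed "smile").
            if state.head_target == region.name and state.expression == "smile":
                events.append(f"activate {region.name}")
        else:
            # Active regions respond to gestures performed over them.
            if state.gesture and region.contains(state.hand_xy):
                events.append(f"{state.gesture} on {region.name}")
    return events  # rendered back to the observer as real-time feedback

if __name__ == "__main__":
    regions = [
        Region("menu", (0, 0, 100, 100), secluded=False),
        Region("promo", (100, 0, 200, 100), secluded=True),
    ]
    state = ObserverState(hand_xy=(50, 50), gesture="swipe",
                          head_target="promo", expression="smile")
    print(update_display(regions, state))  # ['swipe on menu', 'activate promo']
```

The separation between the two region types mirrors the abstract's design: gestures act only where the observer's hand is, so secluded regions cannot be triggered accidentally by body motion alone.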
