Abstract

Wearable head-mounted display (HMD) smart devices are emerging as a smartphone substitute due to their ease of use and suitability for advanced applications such as gaming and augmented reality (AR) [1–2]. Most current HMD systems suffer from: 1) a lack of rich user interfaces, 2) short battery life, and 3) heavy weight. Although current HMDs (e.g., Google Glass) use a touch panel and voice commands as the interface, such interfaces are merely smartphone extensions and are not optimized for HMDs. Recently, gaze was proposed as an HMD user interface [2], but gaze cannot realize a natural user interface and experience (UI/UX), due to its limited interactivity and lengthy gaze-calibration time (several minutes). In this paper, as shown in Fig. 14.1.1, gesture and speech recognition are proposed as a natural UI/UX, based on: 1) speech pre-processing, comprising 2-channel independent component analysis (ICA), speech selection, and noise cancellation; and 2) gesture pre-processing, comprising depth/color-map generation, hand detection, hand segmentation, and noise cancellation. This paper presents a low-power natural UI/UX processor with an embedded deep-learning core (NINEX) that provides wearable AR to HMD users without calibration. Moreover, it achieves higher recognition accuracy than previous work [3].
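The abstract names the two pre-processing pipelines without further detail. A minimal host-side sketch of the speech stage (2-channel ICA followed by speech selection) is given below, assuming a Python prototype with NumPy and scikit-learn; the kurtosis-based selection rule is an illustrative assumption, not the paper's documented method, and the actual pipeline runs on dedicated NINEX hardware.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_and_select_speech(mic: np.ndarray) -> np.ndarray:
    """Separate a 2-channel recording into independent sources and
    return the one most likely to be speech.

    mic: array of shape (n_samples, 2), one column per microphone.
    """
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(mic)  # shape (n_samples, 2)

    # Speech is strongly super-Gaussian; pick the source with the
    # larger excess kurtosis (illustrative selection rule, assumed
    # here rather than taken from the paper).
    def excess_kurtosis(x: np.ndarray) -> float:
        x = (x - x.mean()) / (x.std() + 1e-12)
        return float(np.mean(x ** 4) - 3.0)

    scores = [excess_kurtosis(sources[:, k]) for k in range(2)]
    return sources[:, int(np.argmax(scores))]
```

The gesture stage (depth-map thresholding, hand segmentation, noise cancellation) can be sketched in the same spirit; the arm's-reach depth band and the median-filter noise-cancellation step below are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.ndimage import median_filter

def segment_hand(depth_mm: np.ndarray,
                 near_mm: float = 200.0,
                 far_mm: float = 600.0) -> np.ndarray:
    """Keep depth pixels inside an assumed arm's-reach band and
    return a binary hand mask; the thresholds are illustrative.

    depth_mm: depth map in millimetres, shape (H, W); 0 = no reading.
    """
    band = ((depth_mm > near_mm) & (depth_mm < far_mm)).astype(np.uint8)
    # Simple noise cancellation: a 3x3 median filter removes isolated
    # speckle pixels before the mask reaches gesture recognition.
    return median_filter(band, size=3).astype(bool)
```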
