Abstract
Children with physical impairments may face challenges during play due to limitations in reaching for and handling objects. Telerobotic systems that provide guidance towards toys may enable access to play, but intuitive methods to control that guidance are required. As a first step, adults without physical impairments tested two eye gaze interfaces. One was an attentive user interface that predicts the toy a user wants to reach using a neural network trained on the movements performed with the user-side robot and the user's point of gaze. The other was an explicit eye input interface that selects the toy a user fixates on for at least 500 ms. This study compared the performance and advantages of each interface in a whack-a-mole game. The purpose was to test the feasibility of activating haptic guidance towards toys with an attentive interface and to ensure the safety of the system before children use it. The prediction accuracy of the attentive interface averaged 86.4%, compared to 100% for the explicit interface; accordingly, seven participants preferred the explicit interface over the attentive one. However, the attentive user interface was significantly faster to use and less tiring on the eyes. Ways to improve the accuracy of the attentive eye gaze interface are suggested.
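For illustration, a minimal sketch of the explicit interface's dwell-time selection follows, assuming a stream of timestamped gaze samples tagged with the toy under the point of gaze. The sample format, function name, and toy identifiers are illustrative assumptions, not details from the paper; only the 500 ms threshold comes from the abstract.

```python
# Minimal sketch of explicit dwell-time selection, assuming gaze samples of
# (timestamp_s, toy_id), where toy_id is the toy under the point of gaze
# (None if gaze falls on no toy). All names here are hypothetical.

DWELL_THRESHOLD_S = 0.5  # select a toy after a 500 ms fixation (from the paper)

def select_toy(gaze_samples):
    """Return the first toy fixated for at least DWELL_THRESHOLD_S, else None."""
    current_toy = None
    fixation_start = None
    for timestamp, toy_id in gaze_samples:
        if toy_id != current_toy:
            # Gaze moved to a new target: restart the dwell timer.
            current_toy = toy_id
            fixation_start = timestamp
        elif toy_id is not None and timestamp - fixation_start >= DWELL_THRESHOLD_S:
            # Dwell threshold reached: this toy triggers the haptic guidance.
            return toy_id
    return None

# Example: gaze lingers on one toy for 600 ms, so it is selected.
samples = [(0.00, None), (0.10, "mole_2"), (0.30, "mole_2"), (0.70, "mole_2")]
assert select_toy(samples) == "mole_2"
```

The attentive interface replaces this fixed threshold with a learned predictor over gaze and robot-movement features, which is why it can respond faster but occasionally selects the wrong toy.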