Abstract

People with speech impairments often use assistive technology devices to support communication and daily tasks. These devices can be controlled through different modalities, such as touch, eye gaze, and gestures. This article proposes a standardized methodology for designing non-verbal voice cue interactive systems that enable people with dysarthria to interact vocally with virtual home assistants (VHAs). We adopted a qualitative data-gathering approach to gain insight into users’ experiences and requirements and to identify the design elements crucial to interactive voice assistants for people with dysarthria. Nineteen participants with varying levels of dysarthria took part in the study used to create the framework. A system was then built using the proposed framework, and an additional test with a further seven participants validated the resulting system, thereby supporting the validity of the framework. Our work empirically demonstrates how an informed, structured design of a fast, direct method of communication (verbal, rather than forcing users to switch modalities or rely on an intermediate device) improves the usability of VHAs for people with dysarthria while allowing a more authentic experience. The data also indicate that non-verbal voice cues are a convenient option. By providing a reproducible framework for developing non-verbal interactive systems for VHAs, we can improve the accessibility, usability, and user experience of these devices.
