Abstract

Speech input is a primary method of interaction for blind mobile device users, yet the process of dictating and reviewing recognized text through audio alone (i.e., without access to visual feedback) has received little attention. A recent study found that sighted users could identify only about half of automatic speech recognition (ASR) errors when listening to text-to-speech output of the ASR results. Blind screen reader users, in contrast, may be better able to identify ASR errors through audio due to their greater use of speech interaction and increased ability to comprehend synthesized speech. To compare the experiences of blind and sighted users with speech input and ASR errors, as well as to compare their ability to identify ASR errors through audio-only interaction, we conducted a lab study with 12 blind and 12 sighted participants. The study included a semi-structured interview portion to qualitatively understand experiences with ASR, followed by a controlled speech input task to quantitatively compare participants' ability to identify ASR errors in their dictated text. Findings revealed differences between blind and sighted participants in how they use speech input and in their level of concern about ASR errors (e.g., blind users were more concerned). In the speech input task, blind participants were able to identify only 40% of ASR errors, which, counter to our hypothesis, was not significantly different from sighted participants' performance. An in-depth analysis of speech input, ASR errors, and error identification strategies examined how participants entered text with speech input and reviewed it. Our findings indicate the need for future work on supporting blind users in confidently using speech input to generate accurate, error-free text.
