Abstract

Everyday consumer-level technologies, such as mobile devices, may lend themselves to being repurposed for the special needs of specific populations. In this three-part study, we aimed to investigate whether the Amazon Echo™, a popular consumer personal-assistant device, can function as a speaker-independent device that permits the hands-free retrieval of visual supports for children with autism. Phase 1 investigated whether the Echo’s speaker-independent speech recognition system, linked to a proprietary Amazon “Skill,” could retrieve visual supports in order to facilitate direction following (e.g., “put the girl under the bowl”). The accuracy with which the Echo retrieved and delivered visual supports to the iPad was found to be low, suggesting that the Echo, as initially configured, could not function as a reliable speaker-independent speech recognition system. Subsequently, we customized the vocabulary delivered to the Echo with the relevant terms and repeated the protocol in Phase 2. A significant increase in accuracy was noted. Finally, in Phase 3, the experimenter asked the Echo to retrieve visual supports in the presence of a child with autism while monitoring correct implementation of directives based on successful retrieval by the Echo. Results will be discussed in terms of implications for future research.
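The vocabulary customization described in Phase 2 resembles defining custom slot values in an Alexa Skill's interaction model, where each object and preposition the Skill should recognize is registered explicitly. The Python sketch below illustrates the general idea only; the slot names, utterance pattern, and filename scheme are assumptions for illustration, not the study's actual implementation.

```python
# Hypothetical sketch of a custom vocabulary for a visual-support
# retrieval Skill. All names here are invented for illustration and
# are not drawn from the study's materials.

# Custom slot values, analogous to custom slot types in an Alexa
# interaction model ("OBJECT" and "PREPOSITION" are assumed names).
OBJECT_SLOT = {"girl", "boy", "bowl", "table", "chair"}
PREPOSITION_SLOT = {"under", "on", "beside", "behind"}

def parse_directive(utterance: str):
    """Match a directive like 'put the girl under the bowl' against
    the registered vocabulary. Returns (figure, preposition, ground)
    or None if any word falls outside the custom slot values."""
    words = utterance.lower().replace("the ", "").split()
    # Expected pattern after dropping articles: put <figure> <prep> <ground>
    if len(words) != 4 or words[0] != "put":
        return None
    figure, prep, ground = words[1], words[2], words[3]
    if figure in OBJECT_SLOT and prep in PREPOSITION_SLOT and ground in OBJECT_SLOT:
        return figure, prep, ground
    return None

def visual_support_filename(utterance: str):
    """Map a recognized directive to a hypothetical image filename
    that a companion tablet app could display as the visual support."""
    parsed = parse_directive(utterance)
    if parsed is None:
        return None
    return "{}_{}_{}.png".format(*parsed)

print(visual_support_filename("put the girl under the bowl"))
```

Restricting recognition to a small, explicitly enumerated vocabulary is what a custom slot type accomplishes in a deployed Skill: utterances outside the list fail to match, which is consistent with the accuracy gain the study reports after customization.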

