Abstract

The Internet is increasingly used as a medium for gathering and exchanging health information. Healthcare professionals and organizations need to consider barriers that may exist within their patient-oriented Web applications. One approach to making the Web more accessible for those with lower health literacy may be to supplement textual content with audio annotation using text-to-speech engines, allowing for the creation of a virtual surrogate reader. One challenge is that, with numerous text-to-speech engines on the market, objective measures of quality are difficult to obtain. To facilitate comparisons of text-to-speech engines, we developed an open-source Web application that measures user reaction times, subjective quality ratings, and task-completion accuracy across audio files created by different text-to-speech engines. We successfully built and piloted this Web application; significant differences were found in subjective quality ratings across three text-to-speech engines priced at different levels. However, no significant differences were found in reaction times or accuracy between these engines. Future avenues of research include exploring more complex tasks, usability issues related to implementing text-to-speech features, and applied health promotion and education opportunities among vulnerable populations.


