Abstract

Hundreds of Libraries, Archives, and Museums (LAMs) around the world run crowdsourced transcription projects in order to engage users with their collections. Some LAMs explicitly use crowdsourcing projects to make non‐machine‐readable images of documents, such as manuscripts, discoverable to people who are blind or have low vision. We present findings from Crowdsourced Data: Accuracy, Accessibility, Authority (CDAAA), a 3‐year Institute of Museum and Library Services (IMLS) grant project that investigates whether and how LAMs integrate crowdsourced transcriptions into their discovery systems, and whether these efforts result in accessible web content for blind people and those with low vision who use assistive technology to navigate the web. We share research findings as well as practical suggestions for those in charge of crowdsourcing projects, the resulting transcription data, or similar web‐based textual content such as scholarly editions. These research and practice‐oriented findings are relevant to any national or local context where inaccessible images are transcribed, and are especially timely in the US context given recent Federal rule‐making to ensure that all web and app‐based content provided by US State and local governments is accessible, including tools, resources, and content created in‐house, through contracts, or by license (2024).

