ABSTRACT
Hundreds of Libraries, Archives, and Museums (LAMs) around the world run crowdsourced transcription projects in order to engage users with their collections. Some LAMs explicitly use crowdsourcing projects to make non‐machine‐readable images of documents, such as manuscripts, discoverable to people who are blind or have low vision. We present findings from Crowdsourced Data: Accuracy, Accessibility, Authority (CDAAA), a 3‐year Institute of Museum and Library Services (IMLS) grant project that investigates whether and how LAMs integrate crowdsourced transcriptions into their discovery systems, and whether these efforts result in accessible web content for blind people and those with low vision who use assistive technology to navigate the web. We share research findings as well as practical suggestions for those in charge of crowdsourcing projects, the resulting transcription data, or similar web‐based textual content such as scholarly editions. These research and practice‐oriented findings are relevant to any national or local context where inaccessible images are transcribed, and they are especially timely in the US context given recent Federal rule‐making (2024) to ensure that all web and app‐based content provided by US State and local governments is accessible, including tools, resources, and content created in‐house, through contracts, or by license.