Abstract

This article discusses quality assurance paradigms in the pre- and post-legal deposit environments, exploring how workflows and processes have adapted from a small-scale, selective model to domain-scale harvesting activity. It compares the two approaches and discusses the trade-offs necessitated by the change in scale of web harvesting. The requirements of the non-print legal deposit legislation of 2013, and the accompanying growth in web archiving operations, have necessitated new quality metrics for the web archive collection. Whereas it was previously possible to manually review every instance of a harvested website, the new model requires that more automated methods be employed. The article looks at the tools employed in the selective web archiving model, such as the Web Curator Tool, and those designed for the legal deposit workflow, such as the Annotation and Curation Tool. It examines the key technical issues in archiving websites and how content is prioritized for quality assurance. The article will be of interest to staff of memory institutions, including national libraries, who are tasked with preserving online content, as well as to a wider general audience.
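The abstract notes that domain-scale harvesting rules out manual review of every site and requires automated metrics to prioritize content for quality assurance. As a minimal sketch of what such prioritization might look like (the field names, scoring rule, and thresholds below are illustrative assumptions, not the article's method), each crawl could be scored from its HTTP failure rate and its shrinkage relative to a previous accepted harvest, so that curators review the most suspect harvests first:

```python
from dataclasses import dataclass

@dataclass
class CrawlSummary:
    """Summary statistics for one harvested website (illustrative fields)."""
    seed_url: str
    urls_attempted: int
    urls_ok: int          # responses with HTTP status 200
    bytes_downloaded: int
    previous_bytes: int   # size of the last accepted crawl, 0 if none

def qa_priority(c: CrawlSummary) -> float:
    """Return a score in [0, 1]; higher means more in need of manual review."""
    failure_rate = 1.0 - (c.urls_ok / c.urls_attempted) if c.urls_attempted else 1.0
    if c.previous_bytes:
        shrinkage = max(0.0, 1.0 - c.bytes_downloaded / c.previous_bytes)
    else:
        shrinkage = 0.5  # no baseline crawl: treat as moderately uncertain
    return max(failure_rate, shrinkage)

# Rank a batch of crawls so the most suspect harvests are reviewed first.
crawls = [
    CrawlSummary("http://example.org", 1200, 1150, 48_000_000, 50_000_000),
    CrawlSummary("http://example.net", 900, 400, 2_000_000, 40_000_000),
]
for c in sorted(crawls, key=qa_priority, reverse=True):
    print(f"{c.seed_url}: priority {qa_priority(c):.2f}")
```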
