Abstract
This article discusses quality assurance paradigms in the pre- and post-legal deposit environments, exploring how workflows and processes have adapted as web archiving has moved from a small-scale, selective model to domain-scale harvesting. It compares the two approaches and discusses the trade-offs imposed by this change in scale. The requirements of the 2013 non-print legal deposit legislation, together with the resulting growth in web archiving operations, have demanded new quality metrics for the web archive collection: whereas it was once possible to review every instance of a harvested website manually, the new model requires more automated methods. The article looks at the tools employed in the selective web archiving model, such as the Web Curator Tool, and those designed for the legal deposit workflow, such as the Annotation and Curation Tool. It examines the key technical issues in archiving websites and how content is prioritized for quality assurance. The article will be of interest to staff in memory institutions, including national libraries, who are tasked with preserving online content, as well as to a wider general audience.