Abstract

The Oddy test is an accelerated ageing test used to determine whether a material is appropriate for the storage, transport, or display of museum objects. The levels of corrosion seen on coupons of silver, copper, and lead indicate the material’s safety for use. Although the Oddy test is conducted in heritage institutions around the world, it is often critiqued for a lack of repeatability. Determining the level of corrosion is a manual and subjective process, in which outcomes are affected by differences in individuals’ perceptions and practices. This paper proposes that a more objective evaluation can be obtained by utilising a convolutional neural network (CNN) to locate the metal coupons and classify their corrosion levels. Images provided by the Metropolitan Museum of Art (the Met) were labelled for object detection and used to train a CNN. The CNN correctly identified the metal type and corrosion level of 98% of the coupons in a test set of the Met’s images. Images were also collected from the American Institute for Conservation’s Oddy test wiki page. These images suffered from low image quality and were missing the classification information needed to train the CNN. Experts from cultural heritage institutions evaluated the coupons in the images, but there was a high level of disagreement between expert classifications. Therefore, these images were not used to train the CNN. However, the images proved useful in testing the limitations of the CNN trained on the Met’s data when applied to images of coupons from different Oddy test protocols and photo documentation procedures. This paper presents the effectiveness of the CNN trained on the Met’s data to classify Met and non-Met Oddy test coupons. Finally, this paper proposes the next steps needed to produce a universal CNN-based classification tool.
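To illustrate how per-coupon predictions might feed into a single material verdict, here is a minimal sketch. It assumes the common three-level Oddy rating scheme (P = permanent, T = temporary, F = fail) and the convention that the overall result is the worst rating among the three coupons; the function name and data layout are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: aggregating per-coupon CNN classifications into an
# overall Oddy test result. Assumes a P (permanent) / T (temporary) /
# F (fail) rating per coupon, with the worst coupon rating deciding the
# material's verdict. Not taken from the paper's code.

SEVERITY = {"P": 0, "T": 1, "F": 2}  # higher value = worse corrosion


def overall_oddy_result(coupon_ratings):
    """Return the overall rating for a material.

    coupon_ratings: dict mapping metal symbol ('Ag', 'Cu', 'Pb') to the
    predicted corrosion class ('P', 'T', or 'F') for that coupon.
    """
    return max(coupon_ratings.values(), key=lambda r: SEVERITY[r])


# Example: silver and copper pass, but lead shows temporary-use corrosion,
# so the material is rated suitable for temporary use only.
print(overall_oddy_result({"Ag": "P", "Cu": "P", "Pb": "T"}))  # -> T
```

Under this convention, a single failing coupon is enough to mark a material as unsuitable, which matches the conservative intent of the test.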
