In the last decade, the Natural History Museum, UK (NHM), has been at the forefront of the digitisation of natural history collections, with almost six million of its 80 million specimens digitised. This momentous undertaking has led to numerous innovations in how to optimise digitisation workflows. One avenue currently being explored is the use of collaborative robots, or cobots. Since acquiring a Techman TM5 900 robotic arm in 2023 (Scott et al. 2023), we have been experimenting with its capabilities. Experiments began with simple pick-and-place tasks using artificial specimens. Next, we focused on two use cases, based on the digitisation of shark teeth and pinned insects. Both shark teeth and pinned insects are abundant at the NHM, making their manual digitisation a tedious task. Currently, we have trained the cobot to pick up a specimen, move it elsewhere to photograph and scan it, then return it to its original position or place it somewhere new. Thus far, this has all been coordinate-based. Focusing on pinned insect specimens, we have now begun training deep learning models to perform segmentation, classification, and tracking tasks on images and manually captured videos. Segmentation and classification tasks range from distinguishing specimens from one another within drawers, to classifying different pins, labels, and insects. Meanwhile, object tracking methods are used to track labels across videos taken around the specimen. By tracking different labels simultaneously across multiple frames, we can combine the views of each label to obtain a full picture of it (for example, using tools described in Salili-James et al. 2022). Thus far, our machine learning pipelines have proved successful, for example, achieving F1 scores of 96–98% when classifying and segmenting insects and locating pin heads from dorsal views.
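For reference, the F1 scores reported above combine precision and recall into a single measure. A minimal sketch of the metric (the counts below are illustrative, not taken from our experiments):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, from raw counts of
    true positives (tp), false positives (fp), and false negatives (fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 96 insects segmented correctly, 2 spurious detections,
# 2 missed specimens gives an F1 of roughly 0.98.
print(round(f1_score(tp=96, fp=2, fn=2), 2))
```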
Soon, we will be establishing workflows that integrate computer vision (CV) and machine learning (ML) techniques directly with the robotic arm, with pipelines that could be applied to different datasets and that can significantly enhance efficiency. Broadly, these pipelines can be split into four sections:

1. Specimen identification: CV/ML locates individual specimens or certain parts of specimens, e.g., pinheads within pinned insects.
2. Handling: with custom grippers, the cobot can delicately pick up, move, and place specimens, to and from photography stations for high-quality scanning.
3. Imaging and scanning: the cobot scans and photographs specimens. As the camera moves around the specimen, ML is used to segment and track the specimen labels so they can be imaged from the optimal views. A built-in optical character recognition (OCR) process can then be integrated to perform automatic transcription.
4. Identifiers: the cobot attaches identifier labels to specimens or drawers, after using CV/ML to locate the optimal position to do so.

In this talk, we will discuss the progress of the NHM's cobot research and explore the future of robotics for the digitisation of natural history collections.
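The four pipeline stages above can be sketched as a simple orchestration loop. All function names and data structures here are hypothetical placeholders for illustration; they are not the NHM's actual software interfaces:

```python
# Hypothetical sketch of the four-stage digitisation pipeline.
# Each stage is stubbed; real implementations would call the CV/ML
# models and the cobot's motion API.

def identify(detections):
    # Stage 1: CV/ML locates individual specimens (stubbed as bounding boxes).
    return [{"id": i, "bbox": box} for i, box in enumerate(detections)]

def handle(specimen):
    # Stage 2: the cobot picks up the specimen and moves it to the
    # photography station.
    specimen["at_station"] = True
    return specimen

def image_and_scan(specimen, n_views=3):
    # Stage 3: the camera moves around the specimen; labels are tracked
    # and imaged from several views (stubbed as view names).
    specimen["views"] = [f"view_{k}" for k in range(n_views)]
    return specimen

def attach_identifier(specimen):
    # Stage 4: an identifier label is attached at a CV-chosen position
    # (here, just a synthetic identifier string).
    specimen["identifier"] = f"SPEC-{specimen['id']:06d}"
    return specimen

def digitise(detections):
    """Run every detected specimen through all four stages in order."""
    return [
        attach_identifier(image_and_scan(handle(s)))
        for s in identify(detections)
    ]
```

Keeping each stage behind its own function makes it straightforward to swap a coordinate-based step for an ML-driven one without touching the rest of the pipeline.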