Abstract

Today, it is widely accepted that typology is a biased and inconsistent attempt to classify archaeological material based on the similarity of a predefined set of features. In this respect, machine learning (ML) works similarly to typology. ML approaches are often deployed because they are thought to reduce biases. However, biases are introduced into the process at many points, e.g., during feature selection. In a project applying ML to Scandinavian rock art data, it was noticed that the algorithm struggled to classify certain motifs correctly. This contribution discusses the consistency with which ML applies biases, in contrast to the inconsistency of human classification. It is argued that machines and humans must be brought into a meaningful dialogue in an attempt to understand why apparent “misclassifications” happen. This is important because it informs us about the classification output, our own biases, and the rock art data, which are in themselves inconsistent, ambiguous, and biased because they are the outcomes of human creativity. Human inconsistency is a necessary component because, in rock art, not everything that looks similar has a similar meaning.
