Introduction. We present a cultural heritage informatics study of ‘internet memes’: clusters of documents that carry a collective cultural meaning.

Method. Google’s computer vision data for three memes are used to examine the cultural heritage Google provides for the indexed web, using Gini coefficients and word clouds.

Results. Our networks suggest that the more reducible sense-data are to a small number of labels, the less the internet needs to ‘remember’ to see a ‘meme’. However, when labels are less strongly connected, human inspection is required to judge whether Google captures qualities of memes.

Conclusions. We show that Google can interpret aspects of memetic cultural heritage, but it fails to understand some cultural information about memes relative to predefined expectations. Computational tools like Google Cloud Vision can augment the heterogeneity of cultural heritage work; however, closer inspection by experts is needed to determine when computational methods represent cultural memory.
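As a minimal illustration of the Method, a Gini coefficient can measure how concentrated the labels returned for a meme’s images are: 0 means labels occur evenly, values near 1 mean a few labels dominate. This is a generic sketch, not the study’s actual pipeline; the label counts below are hypothetical stand-ins for Google Cloud Vision output.

```python
def gini(values):
    """Gini coefficient of non-negative counts.

    0 = perfectly even distribution; values near 1 = highly concentrated.
    Uses the sorted-values formula: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n.
    """
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * v for i, v in enumerate(vals, start=1))  # 1-indexed ranks
    return 2 * weighted / (n * total) - (n + 1) / n


# Hypothetical label counts aggregated across images tagged with one meme.
label_counts = {"cat": 48, "fur": 12, "whiskers": 7, "meme": 3, "text": 2}
concentration = gini(label_counts.values())
```

A high value here would indicate that the meme’s imagery reduces to a few dominant labels, the situation the Results describe as the internet needing to ‘remember’ less to see a ‘meme’.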