Abstract

Artificial intelligence (AI) has become widely used for data and knowledge production across diverse domains. Scholars have begun to reflect on the plausibility of AI models that learn unexplained tacit knowledge, giving rise to the emerging research field of eXplainable AI (XAI). However, effective XAI approaches that can translate the tacit knowledge acquired by AI models into human-understandable explicit knowledge have yet to emerge. This paper proposes a novel eXplainable Dimensionality Reduction (XDR) framework, which aims to translate the high-dimensional tacit knowledge learned by AI into explicit knowledge that is understandable to domain experts. We present a case study of recognizing the ethnic styles of village dwellings in Guangdong, China, via an AI model that recognizes building footprints from satellite imagery. We find that the patio, size, length, direction and asymmetric shape of the village dwellings are key to distinguishing Canton, Hakka, Teochew and their mixed styles. The data-derived results, including key features, proximity relationships and geographical distribution of the styles, are consistent with the findings of existing field studies. Moreover, evidence of Hakka migration was found in our results, complementing existing knowledge in architectural and historical geography. The proposed XDR framework can assist experts in diverse fields in further expanding their domain knowledge.
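
The abstract does not specify the reduction method or the features used, so the following is only a minimal sketch of the general XDR idea: project high-dimensional embeddings learned by a footprint-recognition model into a low-dimensional space, then relate each component to interpretable geometric descriptors such as size, elongation and asymmetry. PCA, the 128-dimensional embeddings, the descriptor names and all data below are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of the XDR idea: explain a learned, tacit representation
# by correlating its low-dimensional projection with explicit descriptors.
# All inputs are synthetic placeholders (no real footprint data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical: 500 dwellings, 128-dim embeddings from a trained model.
embeddings = rng.normal(size=(500, 128))

# Hypothetical interpretable descriptors measured from each footprint.
descriptors = {
    "area": rng.uniform(50, 400, size=500),
    "elongation": rng.uniform(1.0, 3.0, size=500),
    "asymmetry": rng.uniform(0.0, 1.0, size=500),
}

# Reduce the high-dimensional representation to a few components.
components = PCA(n_components=3).fit_transform(embeddings)

# Relate each component to the explicit descriptors via correlation.
for i in range(components.shape[1]):
    for name, values in descriptors.items():
        r = np.corrcoef(components[:, i], values)[0, 1]
        print(f"component {i} vs {name}: r = {r:+.2f}")
```

Components that correlate strongly with a descriptor can then be read as explicit, expert-checkable evidence of what the model has learned; on this synthetic data the correlations are near zero by construction.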
