Abstract

In recent years, advanced machine learning and artificial intelligence techniques have gained popularity due to their ability to solve problems across various domains with high performance and quality. However, these techniques are often so complex that they fail to provide simple and understandable explanations for the outputs they generate. The field of explainable artificial intelligence has recently emerged to address this issue. At the same time, most data generated in different domains are inherently structural; that is, they consist of parts and relationships among them. Such data can be represented using either a simple data structure, such as a vector, or a complex data structure, such as a graph. The effect of this representation form on the explainability and interpretability of machine learning models has not been extensively discussed in the literature. In this survey paper, we review efficient algorithms proposed for learning from inherently structured data, emphasizing how the chosen representation form affects the explainability of the learning models. A conclusion of our literature review is that using complex forms or data structures for data representation improves not only learning performance but also the explainability and transparency of the model.
