Abstract

People increasingly think as computers do, and computers have become increasingly humanized. The border between human and computer collapsed long ago, and the interface, as a communication channel between them, has diversified while its definitions have grown more complex. In this vein, it is necessary to pay attention to Explainable Artificial Intelligence (XAI), which can be defined as machine learning technology that constructs a more explainable model while maintaining high performance. Explainable, provable, and transparent AI is essential for establishing trust in computing technologies. From a user-centered standpoint, XAI has significant implications for interdisciplinary AI as the discussion of the human–AI relationship and interaction deepens. The design of an explanation interface is central to effectively communicating AI results from a user-centered perspective. It is important to provide and emphasize information with attention to its selection, assignment, and form. Explanation methods can be designed and applied according to the properties of the AI-based system.
