Abstract

Artificial intelligence (AI) systems are increasingly employed across industries, including the laundry sector, where they can, for example, assist employees in sorting laundry. This study investigates the influence of image-based explanations on the acceptance of such an AI system. Convolutional neural networks (CNNs) were trained to classify the color and type of laundry items, and explanations were generated with Deep Taylor Decomposition, a popular Explainable AI technique. We specifically examined how providing reasonable and unreasonable visual explanations affected the confidence of laundry employees in the system's decisions. Thirty-two participants, diverse in age, experience in the sector, and prior experience with AI technologies, were recruited from a range of laundries. Each participant was presented with a set of 20 laundry classifications made by the AI system and was asked to indicate whether the accompanying image-based explanation strengthened or weakened their confidence in each decision, using a five-point Likert scale ranging from 1 (strongly weakens confidence) to 5 (strongly strengthens confidence). Because the explanations provide visual cues and contextual information, they were expected to enhance participants' understanding of the AI system's decision-making process; we therefore hypothesized that image-based explanations would strengthen participants' confidence in the AI system's classifications, leading to increased acceptance of and trust in its capabilities. The analysis revealed significant main effects for both explanation quality and neural network certainty, as well as a significant interaction between the two. These outcomes hold substantial implications for the integration of AI systems in the laundry industry and related domains: by understanding how image-based explanations influence acceptance, organizations can refine their AI implementations to ensure effective use and positive user experiences. In this way, the study contributes to the ongoing development and improvement of AI systems across industries and seeks to pave the way for enhanced human-AI collaboration and wider adoption of AI technologies. Future research could explore alternative forms of visual explanations to further examine their impact on user acceptance of and confidence in AI systems.
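
To illustrate the explanation pipeline described above, the following is a minimal sketch (not the authors' code) of producing Deep Taylor Decomposition heatmaps for a trained image classifier, assuming a Keras CNN and the iNNvestigate library (v1-style API); the checkpoint name and input shape are hypothetical.

import numpy as np
import innvestigate
import innvestigate.utils as iutils
from keras.models import load_model

model = load_model("laundry_cnn.h5")  # hypothetical trained color/type classifier

# Deep Taylor Decomposition is applied to the pre-softmax output, so the
# final softmax activation is stripped before building the analyzer.
model_wo_softmax = iutils.keras.graph.model_wo_softmax(model)
analyzer = innvestigate.create_analyzer("deep_taylor", model_wo_softmax)

x = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for a laundry photo
relevance = analyzer.analyze(x)  # per-pixel relevance scores, same shape as x

# Sum over the color channels to obtain a single heatmap, which can then be
# overlaid on the input image as the visual explanation shown to users.
heatmap = relevance.sum(axis=-1)[0]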
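
Likewise, the reported analysis (two main effects plus their interaction) corresponds to a two-way repeated-measures design. Below is a minimal sketch of such an analysis, assuming a long-format table of Likert ratings and the statsmodels AnovaRM class; all column names and the synthetic data are hypothetical.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for participant in range(32):
    for quality in ("reasonable", "unreasonable"):
        for certainty in ("high", "low"):
            for _ in range(5):  # 5 trials per cell -> 20 classifications total
                mean = 4.0 if quality == "reasonable" else 2.0
                rating = int(np.clip(round(rng.normal(mean, 1.0)), 1, 5))
                rows.append((participant, quality, certainty, rating))
df = pd.DataFrame(rows, columns=["participant", "quality", "certainty", "rating"])

# Aggregate the repeated trials in each cell, then test the two
# within-subject main effects and their interaction.
res = AnovaRM(df, depvar="rating", subject="participant",
              within=["quality", "certainty"], aggregate_func="mean").fit()
print(res)

AnovaRM averages the five trials per cell and reports F statistics for the two main effects and their interaction, mirroring the structure of the results summarized in the abstract.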
