Abstract

This study examines Machine Learning (ML) transparency, with the goal of making complex model behavior understandable through interpretability and explainability. From a human-centered design perspective, transparency is not an inherent property of the ML model but a relationship between algorithm and users; prototyping and user evaluations are therefore pivotal to arriving at effective transparency solutions. In specialized domains such as medical image analysis, applying human-centered design principles is challenging because of limited access to end users and the knowledge gap between users and ML designers. A systematic review covering 2017 to 2023 screened 2307 records and identified 78 articles that met the inclusion criteria. The findings show that current transparent ML techniques emphasize computational feasibility, often at the expense of end users, including clinical stakeholders. Notably, formative user research rarely guides the design and development of transparent ML models. In response to these gaps, we propose the INTRPRT guideline, a design directive for transparent ML in medical image analysis. Grounded in human-centered design, the guideline emphasizes formative user research to understand user needs and domain requirements, with the ultimate aim of increasing the likelihood that ML algorithms afford transparency and that stakeholders can harness its benefits effectively.
