Context: Digital Health (DH) is widely considered essential for sustainable future healthcare systems. Software quality, particularly usability, is crucial for the success and adoption of most DH products. However, concerns have been raised about the effectiveness and efficiency of usability evaluation of DH products.

Objective: This article aims to analyse the prevalence and application contexts of usability evaluation methods in DH and to highlight potential issues related to their effectiveness and efficiency.

Method: A systematic literature review of usability evaluation studies published by (academic) practitioners between 2016 and April 2023 was conducted. Using five major scientific databases, 610 primary articles were identified and analysed.

Results: Our findings show a preference for inquiry (85%) and testing (63%) methods, with inspection methods used less frequently (17%). The published studies employed methods such as questionnaires (75%), notably the System Usability Scale (SUS, 49%), semi-structured interviews (25%), and heuristic evaluations (73%), with percentages relative to each method group. Data collection mainly involved participant feedback (45%), audio/video recordings (44%), and system logs (20%), with both qualitative and quantitative data analyses prevalent across studies. However, several usability characteristics, such as accessibility, memorability, and operability, were largely overlooked, and automation tools or platforms were not widely used. The systems evaluated included mHealth applications (70%), telehealth platforms (36%), health information technology (HIT) solutions (29%), personalised medicine (Per. Med.) (17%), wearable devices (12%), and digital therapeutics (DTx) interventions (6%), with the participation of general users, patients, healthcare providers, and informal caregivers varying based on the health condition studied.
Furthermore, insights and experiences gathered from 24 articles underscored the importance of mixed-method approaches in usability evaluations, the limitations of traditional methods, the necessity for sector-specific customisation, and the potential benefits of remote usability studies. While eye-tracking emerged as a promising evaluation technique, careful execution and interpretation are crucial to avoid data misinterpretation.

Conclusion: The study's findings show that combining inquiry- and testing-based methods is the prevalent approach to evaluating DH platforms. Despite the wide array of DH systems, the distribution of methods remained consistent across platforms and targeted user groups. The study also underlines the importance of involving target user groups in the evaluation process. Potentially affected cognitive abilities of participants and prospective user groups must be taken into account when choosing evaluation methods, which may therefore need to be tailored. Complementary inspection methods may be particularly useful when recruiting representative participants is difficult. Several potential paths for future research are outlined, such as exploring novel technologies like artificial intelligence for improved automation tool support in the usability evaluation process.