Abstract

e13605 Background: This pilot study was designed to explore the use of remote simulation-based research methods, including data collection procedures, to assess NAVIFY Guidelines. NAVIFY Guidelines is a software application that digitizes NCCN Guidelines and allows the user to navigate them in a stepwise manner to support clinical decisions. The application is designed to simplify access to guidelines and improve adherence to them in the management of oncology patients.

Methods: A total of 10 board-certified oncologists were recruited from large academic centers (5), community hospitals (3), and specialist oncology centers (2). Each oncologist reviewed clinical information for 10 synthetic breast cancer patient cases (i.e., 100 case reviews in total). The cases were developed by a medical oncologist, a surgical oncologist, a radiologist, and a histopathologist. Participants were tasked with selecting the most appropriate guideline-concordant clinical decision under two conditions: 1) using a PDF of the NCCN Guidelines (control), and 2) using NAVIFY Guidelines (experimental). Outcome measures were compared across conditions. Sessions were conducted via Google Meet, with participants using the screen-sharing feature to allow each session to be recorded. The cognitive burden of each method was measured via a psychometric test (the Stroop Colour and Word Test) administered before and after each condition. After each session, a post-simulation survey gathered feedback on 1) study methods, 2) use of synthetic patient cases, 3) workload effort required (via the NASA Task Load Index), and 4) usability of NAVIFY Guidelines (via the System Usability Scale). A short interview was conducted after each session to gather feedback on the benefits and challenges of using NAVIFY Guidelines and to discuss the participant's experience of the simulation session.
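As background for the usability measure named above, the standard System Usability Scale scoring formula (a general convention, not a procedure reported by this study) can be sketched as follows. Odd-numbered items are positively worded and contribute (response − 1); even-numbered items are negatively worded and contribute (5 − response); the sum of contributions is scaled by 2.5 to yield a 0-100 score.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5
    Likert responses, using the conventional SUS scoring rule:
    odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response), and the total is multiplied by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

For example, a neutral response of 3 on every item yields a score of 50, the midpoint of the scale.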
Results: All data collection procedures were successful, including 1) assessment of time taken to reach a clinical decision, 2) psychometric testing, 3) survey data, and 4) user feedback. All participants felt the synthetic patient cases were either 'very similar' (7 of 10) or 'somewhat similar' (3 of 10) to cases seen in routine clinical practice, and the majority felt the task completed during the sessions was 'very similar' (3 of 10) or 'somewhat similar' (4 of 10) to their routine clinical work.

Conclusions: Remote evaluations can be conducted quickly, helping to keep pace with product development cycles. The method is easily repeated, supporting evaluation of a product as it develops over time. In addition, managing simulations remotely enables simultaneous evaluations across multiple geographies, and the use of synthetic patient data eliminates data protection concerns. This study suggests that simulation-based research can be used to assess digital health solutions remotely in a timely fashion.
