Abstract

Diabetic foot wounds, a devastating complication of diabetes, are a major burden on patients and the healthcare system. Training in the skills needed to address such wounds is critical. To support competency-based training of critical wound assessment skills, we developed the Diabetic Wound Assessment Learning Tool (DiWALT). While the original tool demonstrated excellent generalizability (generalizability coefficient of 0.871) and reliability (Phi coefficient of 0.866), rater feedback indicated that its utility was limited by its length (23 items). In this study, we aimed to use item-analysis metrics to refine and shorten the assessment tool and to examine validity evidence for the abbreviated versions in workplace-based assessment. In phase 1, data from the initial DiWALT validation study were reanalyzed to identify redundant items and items with poor psychometric performance. We calculated item-total score correlations and discriminative indices and conducted a generalizability study. Based on these data, we shortened the DiWALT. Four raters were recruited and conducted 60 video-based simulated assessments using the refined tool. In phase 2, the tool was shortened again using the results from phase 1. Subsequently, four raters conducted 60 video-based simulated assessments using this abbreviated version. A generalizability study was conducted for each phase to assess the reliability of each refinement. Based on discriminative index, item-total correlation, and generalizability, the DiWALT was initially shortened to 16 items. Analysis of the 60 assessments conducted with this version yielded a generalizability coefficient of 0.87 and a Phi coefficient of 0.75. Based on these results, the tool was abbreviated to 5 items, one for each DiWALT subscale, making it feasible for use as a short workplace-based assessment. Analysis of the second 60 assessments, using the 5-item version, revealed a generalizability coefficient of 0.90 and a Phi coefficient of 0.86. We were able to refine the DiWALT in a data-driven manner, improving both its generalizability and its utility. Notably, the 5-item DiWALT outperformed both the 23- and 16-item tools. This was unexpected: decreasing the number of test items is generally expected to worsen reliability metrics because any scoring error is amplified. These results likely reflect the abbreviated DiWALT functioning better from a rater-cognition perspective, as fewer assessment items have been shown to decrease the intrinsic cognitive load on raters, improving rater performance.
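For illustration, the item-analysis metrics used in phase 1 can be computed from a candidates-by-items score matrix roughly as sketched below. This is a minimal sketch, not the study's actual analysis code: the function name, the upper/lower 27% grouping convention, and the normalization of the discrimination index by the item maximum are assumptions made for this example.

    import numpy as np

    def item_analysis(scores):
        # scores: candidates x items array of rating-scale item scores.
        n_candidates, n_items = scores.shape
        totals = scores.sum(axis=1)
        # Upper/lower groups for the discrimination index; the 27% split
        # is a common convention (assumed here, not taken from the paper).
        k = max(1, int(round(0.27 * n_candidates)))
        order = np.argsort(totals)
        low, high = order[:k], order[-k:]
        results = []
        for j in range(n_items):
            # Corrected item-total correlation: correlate the item with
            # the total score excluding that item, to avoid inflation.
            rest = totals - scores[:, j]
            r = np.corrcoef(scores[:, j], rest)[0, 1]
            # Discrimination index: mean item score of the top group
            # minus the bottom group, normalized by the item maximum.
            item_max = scores[:, j].max()
            span = item_max if item_max > 0 else 1.0
            d = (scores[high, j].mean() - scores[low, j].mean()) / span
            results.append((j, r, d))
        return results

    # Example: flag items with a low corrected item-total correlation
    # or low discrimination as candidates for removal (thresholds are
    # illustrative).
    rng = np.random.default_rng(0)
    demo = rng.integers(0, 4, size=(12, 23))  # 12 candidates, 23 items
    for j, r, d in item_analysis(demo):
        if r < 0.2 or d < 0.2:
            print(f"item {j + 1}: r_it = {r:+.2f}, D = {d:+.2f}")

For reference, the two reliability indices reported throughout are standard generalizability-theory quantities. Assuming a fully crossed persons-by-raters design (an assumption for this sketch; the study's design may include additional facets), with variance components \sigma^2_p (persons), \sigma^2_r (raters), and \sigma^2_{pr,e} (interaction plus residual error), and n_r raters, the coefficients are commonly defined as

    E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n_r}, \qquad
    \Phi = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_r/n_r + \sigma^2_{pr,e}/n_r}

Because the absolute-error term in \Phi additionally includes the rater main effect \sigma^2_r/n_r, \Phi can never exceed E\rho^2, which is consistent with each Phi coefficient above falling at or below its corresponding generalizability coefficient.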
