Construct: Clinical skills are used in the care of patients and include reporting, diagnostic reasoning, and decision-making skills. Written comprehensive new patient admission notes (H&Ps) are a ubiquitous part of student education but are underutilized in the assessment of clinical skills. The interpretive summary, differential diagnosis, explanation of reasoning, and alternatives (IDEA) assessment tool was developed to assess students' clinical skills using written comprehensive new patient admission notes.

Background: The validity evidence for assessment of clinical skills using clinical documentation following authentic patient encounters has not been well documented. Diagnostic justification tools and postencounter notes are described in the literature1,2 but are based on standardized patient encounters. To our knowledge, the IDEA assessment tool is the first published tool that uses medical students' H&Ps to rate students' clinical skills.

Approach: The IDEA assessment tool is a 15-item instrument that asks evaluators to rate students' reporting, diagnostic reasoning, and decision-making skills based on medical students' new patient admission notes. This study presents validity evidence in support of the IDEA assessment tool using Messick's unified framework, including content (theoretical framework), response process (interrater reliability), internal structure (factor analysis and internal-consistency reliability), and relationship to other variables.

Results: Validity evidence is based on results from four studies conducted between 2010 and 2013. First, the factor analysis (2010, n = 216) yielded a three-factor solution, measuring patient story, IDEA, and completeness, with reliabilities of .79, .88, and .79, respectively. Second, an initial interrater reliability study (2010) involving two raters demonstrated fair to moderate consensus (κ = .21–.56, ρ = .42–.79). Third, a second interrater reliability study (2011) with 22 trained raters also demonstrated fair to moderate agreement (intraclass correlations [ICCs] = .29–.67). There was moderate reliability for all three skill domains, including reporting skills (ICC = .53), diagnostic reasoning skills (ICC = .64), and decision-making skills (ICC = .63). Fourth, there was a significant correlation between IDEA rating scores (2010–2013) and final Internal Medicine clerkship grades (r = .24), 95% confidence interval (CI) [.15, .33].

Conclusions: The IDEA assessment tool is a novel tool with validity evidence to support its use in the assessment of students' reporting, diagnostic reasoning, and decision-making skills. The moderate reliability achieved supports formative or lower stakes summative uses rather than high-stakes summative judgments.