Context and setting
Since the Liaison Committee on Medical Education began requiring all medical schools in the USA to offer cultural competency training, several surveys have been developed to measure cultural competency. However, these measures have neither been compared with one another nor applied in a non-English-speaking setting. This report presents the psychometric properties of 3 instruments tested in a Taiwanese medical school, where a cultural competency curriculum was recently introduced.

Why the idea was necessary
Owing to globalisation, the ethnic make-up of Taiwan's population is becoming increasingly diverse. There is an urgent need for cultural competency training in Taiwan, and reliable and valid assessment tools are essential to evaluate its effectiveness.

What was done
In May 2006 we recruited 90% (237/262) of our Year 3 and Year 4 medical students to complete a survey containing the Inventory for Assessing the Process of Cultural Competence among Healthcare Professionals-Revised (IAPCC-R), which has 5 subscales (cultural awareness, cultural knowledge, cultural skill, cultural encounter and cultural desire); the California Brief Multicultural Competence Scale (CBMCS), which has 4 subscales (multicultural knowledge, awareness of cultural barriers, sensitivity to consumers, and sociocultural diversities); and an instrument designed to measure the preparedness of US residents to deliver cross-cultural care (CCC), which has 2 subscales (self-reported preparedness and self-reported skill levels). Within 3 weeks, 78 students volunteered to take the retests. These instruments were chosen because they are reported to have good psychometric properties based on large testing samples. We analysed the data with SAS 9.1.

Evaluation of results and impact
Cronbach's α coefficients for internal consistency ranged from 0.06 to 0.57 across the IAPCC-R subscales and from 0.76 to 0.92 across the CBMCS subscales, and were 0.95 and 0.96 for the CCC instrument's 2 subscales. These results indicate that the IAPCC-R does not have good internal consistency, whereas the CBMCS and the CCC instrument do. The paired t-tests and test-retest correlation coefficients showed that the IAPCC-R and the CBMCS do not have good test-retest reliability, whereas the CCC instrument does. The Kaiser-Meyer-Olkin measure of sampling adequacy was computed to examine construct validity, and the results indicated that factor analysis was appropriate for all 3 scales. We used parallel analysis to determine the number of factors. Exploratory factor analysis with iterated principal factor extraction and Promax oblique rotation showed that the IAPCC-R does not have an identifiable factor structure. The 2 factors extracted for the CBMCS (inter-factor correlation 0.66) did not fully correspond to its hypothesised 4-factor structure. For the CCC instrument, the extracted factor structure was consistent with the hypothesised structure (inter-factor correlation 0.79). In summary, our results indicate that the CCC instrument has good internal consistency, good test-retest reliability and good construct validity. The CCC instrument is useful for assessing cultural competency skills, but it does not measure awareness or knowledge. The instruments we tested that contained awareness and knowledge constructs showed problematic reliability and validity, indicating the need to develop better instruments for measuring cultural competency in settings such as Taiwan.
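For readers who wish to run the same style of psychometric analysis, the sketch below shows how the reported procedures could be carried out in SAS (the package named above). It is a minimal illustration, not the study's actual code: the dataset name (survey), item variables (item1-item10), and subscale scores (score1, r_score1 for the retest) are hypothetical placeholders, and the number of factors retained would come from a separate parallel analysis step.

    /* Internal consistency: Cronbach's alpha for one hypothetical subscale */
    proc corr data=survey alpha nomiss;
       var item1-item10;
    run;

    /* Test-retest reliability: paired t-test and Pearson correlation
       between test and retest subscale scores (hypothetical variables) */
    proc ttest data=survey;
       paired score1*r_score1;
    run;
    proc corr data=survey pearson;
       var score1 r_score1;
    run;

    /* Construct validity: Kaiser-Meyer-Olkin sampling adequacy (MSA option)
       and exploratory factor analysis with iterated principal factor
       extraction (METHOD=PRINIT) and Promax oblique rotation; NFACTORS=2
       stands in for the number suggested by parallel analysis */
    proc factor data=survey method=prinit priors=smc msa
                nfactors=2 rotate=promax;
       var item1-item10;
    run;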