Abstract
This study assessed the suitability of item response theory (IRT) for medical examination data. The specific purposes were (1) to see whether the American Board of Internal Medicine (ABIM) Certifying Examination data met IRT model assumptions and (2) to apply the one-parameter and three-parameter IRT models to the data and observe whether the expected benefits were obtained. Analysis of examinees' responses to single-best-answer items supported the general assumptions of local independence, unidimensionality, and nonspeededness. The specific assumptions of the three-parameter model were met, in that items differed in discrimination and guessing. The estimated ability and item parameters were not initially as stable as hoped, because of a few poorly estimated parameters and possibly the homogeneity of the examinee group and, consequently, the large number of items with poor discrimination. Future work needs to include content experts to help understand why some items do not fit and to ensure that the retained items result in a content-valid examination.
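For readers unfamiliar with the models named above, the three-parameter logistic (3PL) model expresses the probability of a correct response as a function of examinee ability and three item parameters: discrimination, difficulty, and a lower asymptote for guessing. A minimal sketch (illustrative only; not code from the study) is:

```python
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL model: probability that an examinee of ability theta answers correctly.

    a = item discrimination, b = item difficulty, c = guessing (lower asymptote).
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def p_correct_1pl(theta: float, b: float) -> float:
    """One-parameter (Rasch-type) model: the 3PL with a fixed at 1 and c at 0."""
    return p_correct_3pl(theta, 1.0, b, 0.0)
```

At theta equal to the item difficulty b, the 3PL probability is halfway between the guessing floor c and 1, which is why items that differ in a and c (as the study found) cannot be captured by the one-parameter model alone.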