The national licensing examination is used to evaluate medical competency at the time of graduation; however, no study has yet examined the validity of the traditional Korean medicine licensing examination. The purpose of this study was to develop learning analytics based on item response theory (IRT) to examine the validity of a mock test of the national licensing examination and the academic competency it measures. Classical test theory and IRT were used to evaluate item validity, and IRT was used for test validity and competency analysis. Correlations among the competency scores of the 12 subjects were analyzed using Pearson's correlation. The distribution of students' latent competencies was examined by gender and administrative group using a kernel density map, latent profile analysis, and the χ<sup>2</sup> test. The guessing parameters of the 340 items were relatively high, and the information levels of the 12 subjects were relatively low. Significant correlations (r = 0.49–0.83, <i>p</i> < 0.05) were observed between the total competency score and the scores of the 12 subjects. Two latent academic competency groups (high and low) were identified based on the competency scores of the 12 subjects. The low academic competency group, which requires intensive management, had a significantly higher frequency of male students with experience of academic failure in the seven-year course. This study presents quantitative learning analytics for the national licensing examination of traditional Korean medicine. Multifaceted item and test validities of the mock licensing test were provided, and an evidence-based approach to competency-based student management and to the national licensing examination of traditional Korean medicine was suggested.
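The abstract does not name the specific IRT model, but the presence of a guessing parameter suggests a three-parameter logistic (3PL) model, under which high guessing directly depresses item information. A minimal sketch of that relationship, assuming the standard 3PL formulas (the parameter values below are illustrative, not the study's estimates):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    where a = discrimination, b = difficulty, c = guessing."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Item information for the 3PL model:
    I(theta) = a^2 * (Q / P) * ((P - c) / (1 - c))^2."""
    p = p_3pl(theta, a, b, c)
    return a**2 * ((1.0 - p) / p) * ((p - c) / (1.0 - c))**2

# At theta = b with a = 1, a zero-guessing item yields information 0.25;
# raising the guessing parameter to 0.25 cuts it to 0.15.
print(info_3pl(0.0, 1.0, 0.0, 0.0))   # 0.25
print(info_3pl(0.0, 1.0, 0.0, 0.25))  # 0.15
```

This illustrates the abstract's pattern: when guessing parameters are relatively high, item (and hence test) information is relatively low, so ability is estimated less precisely.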