The Computerized Adaptive Practice (CAP) system comprises a set of algorithms for assessing players' expertise and the difficulty of in-game problems, and for adapting the latter to the former. However, effective use of CAP requires that in-game problems be designed carefully and refined over time to avoid potential barriers to learning. This study proposes a methodology and three instruments for analyzing the problem set in CAP-enabled games: the Guttman scale, a ranked order, and a Hasse diagram, which offer analysis at different levels of granularity and complexity. The methodology uses quantified difficulty measures to infer the topology of the problem set, and it is well suited to serious games that emphasize practice and repetitive play. Emphasis is placed on ease of use and on visualization of the problem space, so as to best support teachers and game developers in designing and refining CAP-enabled games. Two case studies demonstrate practical applications of the proposed instruments on empirical data. Future research directions are proposed to address potential drawbacks.