The development of novel educational assessment models founded on item response theory (IRT), as well as software tools designed to implement these models, has contributed to the surge in computerized adaptive tests (CATs). The distinguishing characteristic of CATs is that the sequence of items on a test progressively adapts to the performance level of students as they are taking it. An important advantage of CATs is that they can reduce the duration of the assessment by automatically excluding, in real time, those items that are either too easy or too hard for a student’s ability. Furthermore, a CAT can provide real-time feedback to students based on their ongoing performance on the test. More recently, dynamic CATs have emerged that include special features (e.g., graduated prompts, pretest and posttest assessment items, cognitive scaffolding items) to assess the students’ zone of proximal development. This allows test administrators to obtain information about the kind and amount of mediation students require to reach their optimal performance. The following article presents some initial results from the experimental application of a computerized adaptive dynamic assessment battery of reading processes in a sample of Spanish-speaking elementary school students. Specifically, the aim was to analyze how the graduated prompts implemented in a syntactic awareness test affected the results obtained. In addition, preliminary results regarding the predictive and incremental validity of the dynamic scores with respect to reading competence are presented and discussed.
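To make the adaptive mechanism described above concrete, the following is a minimal illustrative sketch, not the battery's actual implementation, of IRT-based item selection under the one-parameter (Rasch) model: the next item administered is the unused one whose difficulty is closest to the current ability estimate (which maximizes Fisher information under this model), and the ability estimate is updated after each response. The item bank, the simulated student, and all function names are hypothetical.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the 1PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def select_item(theta, bank, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate; under the Rasch model this maximizes information."""
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - theta))

def update_theta(theta, responses, iters=20):
    """Newton-Raphson MLE of ability from (difficulty, correct) pairs.
    Well-defined only when the response pattern is mixed (some right,
    some wrong)."""
    for _ in range(iters):
        grad = sum(x - rasch_p(theta, b) for b, x in responses)
        info = sum(p * (1 - p)
                   for p in (rasch_p(theta, b) for b, _ in responses))
        if info < 1e-9:
            break
        theta += grad / info
    return theta

# Hypothetical item bank (difficulties on the same logit scale as theta).
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
used, responses, theta = set(), [], 0.0

# Administer 4 items; the simulated student answers correctly whenever
# the item is easier than their (unknown to the test) ability of 0.8.
for _ in range(4):
    i = select_item(theta, bank, used)
    used.add(i)
    responses.append((bank[i], 1 if bank[i] < 0.8 else 0))
    if 0 < sum(x for _, x in responses) < len(responses):
        theta = update_theta(theta, responses)  # mixed pattern: use MLE
    else:
        # All-correct or all-wrong so far: step the provisional estimate
        # up or down instead (a common CAT start-up heuristic).
        theta += 0.7 if responses[-1][1] else -0.7

print(round(theta, 2))  # estimate lands near the simulated ability
```

This is the sense in which a CAT shortens testing: each item is chosen to be maximally informative at the student's provisional level, so items far too easy or too hard are simply never administered.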