Psychological tests with a conventional format present the same set of items to every test taker. Such tests have been popular because they are easy to administer in paper-and-pencil form to large groups of test takers at the same time. When testing became computerized in the early 1990s, the initial applications retained this linear format, but it did not take long for computer-based testing to become adaptive. In an adaptive format, the computer updates the test taker's score after each response to an item and automatically adapts the choice of the next item to the current score. Statistically, adaptive testing requires that test scores be comparable across different selections of items, and by the time computer-based testing began to mature, item-response theory had become fully developed and was able to provide exactly that. As argued in the opening article of this special issue, these first applications in fact returned to the same principles of testing that any proctor involved in the testing of individual subjects followed before testing became fully standardized: immediate judgment of the quality of the responses and avoidance of questions that appear too easy or too difficult for the subject.

The advantages of adaptive testing are a much shorter test length to realize the same level of score precision as a linear test; uniform precision over a much wider range of the ability being tested; more motivated test takers, who are challenged by the items without finding them unduly difficult; and higher security, since there are no paper-and-pencil forms to be compromised (albeit less security if the test can be taken continuously in high-stakes applications). In addition, adaptive testing shares the benefits of any type of computer-based testing, such as immediate scoring, the possibility of using innovative item types, and the electronic monitoring of test takers as well as the bookkeeping of their performances.
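The adaptive cycle just described — re-estimate the test taker's ability after each response, then choose the next item to match that estimate — can be sketched in a few lines. The sketch below assumes a simple Rasch (one-parameter logistic) model with maximum-information item selection and a Newton-Raphson maximum-likelihood ability update; these particular choices are illustrative assumptions, not a method taken from any article in this issue.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response at ability theta under the Rasch model
    with item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta; it peaks
    when the item difficulty b equals theta."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def select_item(theta, item_bank, administered):
    """Pick the not-yet-administered item (by index) that is most
    informative at the current ability estimate."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, item_bank[i]))

def update_theta(theta, responses, items, item_bank, n_iter=20):
    """Newton-Raphson maximum-likelihood update of theta given the scored
    responses (1 = correct, 0 = incorrect) to the administered items."""
    for _ in range(n_iter):
        grad = sum(u - rasch_prob(theta, item_bank[i])
                   for u, i in zip(responses, items))
        info = sum(item_information(theta, item_bank[i]) for i in items)
        if info == 0:
            break
        theta += grad / info
    return theta
```

For example, at an ability estimate of 0.0 the selection rule picks the item whose difficulty is closest to 0.0, exactly the behavior described above: items that are neither too easy nor too difficult for the test taker. (A practical program would also handle all-correct or all-incorrect response patterns, for which the maximum-likelihood estimate is unbounded.)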
These gains are only realized, however, when the testing agency is willing to invest in extensive item writing and calibration as well as in the software and hardware required to run the test. The motivation for this special issue of the Zeitschrift für Psychologie / Journal of Psychology was to demonstrate the benefits of adaptive models for psychological testing and to explore the issues associated with their implementation. Until recently, the main technological advances and applications of computerized adaptive testing lay in the domain of educational testing. This development is surprising because standardized testing actually started in psychology. Also, item-response theory – a necessary tool for running adaptive testing programs – was developed in psychometrics, one of the earliest subdisciplines to emerge in psychology.

The outline of the issue is as follows. The opening article, “Some New Developments in Adaptive Testing Technology” (van der Linden), highlights the historical roots of adaptive testing and introduces its main principles. It then reviews some of the newer developments in its technology, such as constrained adaptive testing, rule-based generation of item pools for adaptive testing, multidimensional adaptive testing, optimal sequencing of adaptive test batteries, and the use of response times to improve adaptive testing. The next two articles, “Computerized Adaptive Testing of Personality Traits” by A. Michiel Hol, Harry C.M. Vorst, and Gideon J. Mellenbergh, and “Transitioning from Fixed-Length Questionnaires to Computer Adaptive Testing” by Otto B. Walter and Heinz Holling, show the advantages of, and the effort involved in, creating an adaptive version of a conventional linear instrument. The example used by Hol et al. is a Dutch version of the dominance scale of Gough and Heilbrun’s Adjective Check List (ACL) with five-category Likert response scales.
Walter and Holling transformed a German translation of the Interpersonal Competence Questionnaire (ICQ) into an adaptive instrument and evaluated the impact of the transition on the test scores. The articles “Computer Adaptive-Attribute Testing: A New Approach to Cognitive Diagnostic Assessment” by Mark J. Gierl and Jiawen Zhou; and “ICAT: An Adaptive Testing Procedure to Allow the Identification of Idiosyncratic Knowledge Patterns” by G. Gage Kingsbury and Ronald L. Houser explore the possibilities of an adaptive testing format for cognitive diagnosis. Relying heavily on developments in