A CENTRALIZED test development facility offers many advantages to educational or training activities such as military service schools or industrial training centers. The major advantages are the development of better quality measuring instruments, tighter control over test administration, and sounder interpretation and use of test results. These advantages are made possible by the employment of a staff of professional test construction specialists, with supporting statisticians and clerical personnel, to perform all test development functions for the training activity.

However, a centralized system does present some administrative problems. Probably the most critical problem is the development of procedures which will prevent the introduction of error into tests during the mechanical collation and reproduction phases of test production. Still another critical problem is the development of techniques for maintaining the currency and appropriateness of test items under conditions of constant curriculum change and frequent shifts in the sequence of instruction.

The system of test development used until recently in the United States Army Security Agency Training Center and School followed standard procedures. Test items were constructed in accordance with a preliminary test plan, checked for content validity and technical accuracy, and administered on a trial basis. The difficulty and discriminating power of items were determined by item analysis and used as a basis for item revision. The final test was then constructed in accordance with a revised test plan, specifying the desired content coverage, level of difficulty, and other selected criteria to satisfy the evaluation requirement.
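The item analysis mentioned above is conventionally carried out with two classical statistics per item: a difficulty index (the proportion of examinees answering correctly) and a discrimination index (here computed by the common upper-lower-group method). The following is a minimal sketch of that classical procedure, not a reconstruction of the School's own forms or computations; the function and variable names are illustrative, and the 27% group fraction is one widely used convention.

```python
# Classical item analysis sketch: per-item difficulty (p-value) and
# upper-lower-group discrimination. Names and the 27% fraction are
# illustrative assumptions, not taken from the source article.

def item_analysis(responses, frac=0.27):
    """responses: one list of 0/1 item scores per examinee.

    Returns a (difficulty, discrimination) pair for each item.
    """
    n_items = len(responses[0])
    # Rank examinees by total test score, then take the top and
    # bottom groups of size frac * N for the discrimination index.
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, round(frac * len(ranked)))
    upper, lower = ranked[:k], ranked[-k:]

    stats = []
    for i in range(n_items):
        # Difficulty: proportion of all examinees answering item i correctly.
        p = sum(s[i] for s in responses) / len(responses)
        # Discrimination: upper-group minus lower-group proportion correct.
        d = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / k
        stats.append((round(p, 2), round(d, 2)))
    return stats
```

An item with difficulty near 0.5 and a clearly positive discrimination index would be retained or lightly revised; items missed by strong examinees as often as weak ones (discrimination near zero) are the ones flagged for rewriting.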
The professional task of item selection is time-consuming, and the processes of typing test items on cards for filing, on mock-ups for coordination, and on masters for duplication or printing are not only time-consuming but, of greater import, frequently result in the introduction of error. That is, no matter how carefully the selecting, collating, and transcribing procedures are handled, and regardless of the number of checks built into the system, a manual method produces error. Any error in a test, of course, reduces the validity, reliability, and objectivity of the instrument. But of equal importance, the inevitable consequence of error-ridden tests is a lowering of student and faculty confidence in the worth of the tests.