Glaser and Nitko (1971) define a criterion-referenced test as one that is deliberately constructed to yield measurements that are directly interpretable in terms of performance standards (p. 653). This is probably the best-known definition of a criterion-referenced test, but others have been proposed (e.g., Harris & Stewart, 1971; Ivens, 1970; Kriewall, 1969; Livingston, 1972). Nothing in the Glaser and Nitko definition, or in most other definitions of a criterion-referenced test, necessitates the existence or use of a single criterion or cutting score as a specified performance standard. However, much of the literature subsumed under the heading of criterion-referenced measurement does, in fact, postulate the existence of a single cutting score. Since this inconsistency in terminology can lead to confusion, we prefer to reserve the term mastery test for a criterion-referenced test with a single fixed mastery cutting score (see Harris, 1974).

Hively (1974) and Millman (1974), among others, suggest using the descriptor domain-referenced test rather than criterion-referenced test. They note that the word criterion is ambiguous in some contexts, and they argue that the word domain provides a more direct specification of the entire set of items or tasks under consideration. If one accepts these arguments, a mastery test can be defined as a domain-referenced test with a single cutting score.

In this paper we develop and discuss an index of dependability for mastery tests. For reasons discussed later, we choose not to call our index a reliability coefficient, although many of the indices previously developed for mastery tests have been proposed as indices of reliability (see Brennan, 1974, for a review of the literature). For example, Livingston (1972) proposed a reliability coefficient based upon the squared deviations of scores from the cutting score; Swaminathan, Hambleton, and Algina (1974) proposed the use of Cohen's (1960) coefficient kappa; Marshall and Haertel (1975) suggest using a mean split-half coefficient of agreement; and Carver (1970) suggests two other coefficients.
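For concreteness, the first two of these indices may be sketched in their commonly cited forms (the notation here is ours, not necessarily that of the original sources). Livingston's coefficient compares squared deviations of true and observed scores from the cutting score $C$, and Cohen's kappa compares observed and chance-expected agreement of mastery/nonmastery classifications over two testings:
\[
k^2(X, T_X) \;=\; \frac{\rho_{XX'}\,\sigma_X^2 + (\mu_X - C)^2}{\sigma_X^2 + (\mu_X - C)^2},
\qquad
\kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\]
where $\rho_{XX'}$ is the classical reliability coefficient, $\mu_X$ and $\sigma_X^2$ are the observed-score mean and variance, and $p_o$ and $p_e$ are the observed and chance proportions of consistent mastery classifications.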