To the Editor:

Hertz-Picciotto and Delwiche1 analyze the databases of the California Department of Developmental Services (DDS) to ascertain the incidence of autism and to determine whether the increasing incidence can be explained by changing age at diagnosis and by shifting diagnostic criteria. They conclude that there is a true increase in incidence that cannot be accounted for by these factors. Because an increasing number of investigators are drawing conclusions from these data, it is crucial to identify problems with the datasets and to temper overly ambitious conclusions.

First, the databases should not be employed to ascertain incidence,2 and yet the authors use both individual client records and quarterly reports to measure incidence. As the DDS explains in its quarterly reports: “Increases in the number of persons reported from one quarter to the next do not necessarily represent persons who are new to the DDS system.”3 “Differences in the numbers from quarter to quarter reflect the net changes between individuals who are newly reported (ie, included in the later report but not included in the earlier report) and individuals who dropped out.”4 Furthermore, caseload numbers from DDS Regional Centers suggest a prevalence rate far below the estimates in widely accepted studies.5 Thus, the California data seriously underestimate the prevalence of autism and do not reflect the total population prevalence pool; they should not be used for incidence studies.

Second, the authors included in their analysis children between birth and 36 months of age identified in California's Early Start program. Because the goal of Early Start is to identify an at-risk population, the program's criteria are less restrictive than those used for older children and may therefore overestimate the rate of autism. The program may also underestimate the number of children with autism, because many evaluators are reluctant to give (and are not required to give) diagnoses.
Third, the authors place undue confidence in age of first appearance in the DDS dataset as a proxy for date of diagnosis. They also unjustifiably diminish the influence of qualitative factors, such as increased awareness, geographic disparities in caseload, and an emerging industry of therapies for individuals with an autism classification. The authors likewise dismiss diagnostic substitution, despite the fact that the total DDS caseload of developmental disabilities has remained stable over the last decade, and despite increasing evidence that substitution significantly increases autism classifications.6

Fourth, classifications are made by dozens of field evaluators throughout California's Regional DDS Centers. DDS has no control over the diagnostic procedures of individual Centers; some comply with DDS practice standards, whereas others do not. Although diagnosticians use a standard form (the Client Development Evaluation Report), it was not designed as a research tool, and the evaluators are not engaged in epidemiologic research.

Fifth, the authors note that neither Asperger disorder nor “pervasive developmental disorders not otherwise specified” qualifies as “autism” under DDS guidelines, yet evaluators routinely include milder cases under the category of autism to enable these children to receive services. More importantly, neither the Client Development Evaluation Report nor the gold-standard Autism Diagnostic Observation Schedule and Autism Diagnostic Interview-Revised can clearly distinguish among the autism spectrum disorders. DDS autism classifications need to be subjected to the same scrutiny as other databases used for the evaluation and surveillance of disease. Numerous studies highlight the poor diagnostic accuracy of prevalence studies that rely on administrative data.7,8 Do diagnosticians in the California regional centers receive reliability training? To what extent are the gold-standard diagnostic procedures employed?
Are the caseload data representative of the general population? Do DDS classifications have positive predictive value? Before researchers offer bold interpretations of these data, they must acknowledge the limits of the data's scientific value.

Roy Richard Grinker
Anthropology and the Human Sciences
The George Washington University
Washington, DC
[email protected]

Bennett L. Leventhal
University of Illinois
Chicago, IL