Abstract

Sir—In a recent study in Journal of Applied Microbiology, Carmena et al. (2007) refer to ‘infectious doses in humans as low as 30 oocysts for Cryptosporidium and 10 cysts for Giardia’. The term ‘infectious dose’ is not only meaningless but also suggests that lower doses are harmless. Those doses merely represent the lowest doses at which infection was observed in the human volunteer studies. Administering smaller doses to larger numbers of volunteers would have resulted in some infections. Indeed, it is the risk from a single (oo)cyst which is important. At the very least, the use of the term ‘infectious dose’ (ID) should be quantified by presenting the probability of infection at that dose. Thus, for example, the ID50 is the dose required to initiate infection in a host with a 50% probability of success.

A further point for consideration is the use by Carmena et al. (2007) of the proposed action level of 10–30 oocysts 100 l−1 to draw conclusions about the public health risk to the population of 50 000 supplied by small water treatment facilities (SWTF). Action levels may not be appropriate, in part, because the group risk is determined by the arithmetic mean oocyst level, which tends to be underestimated by spot sampling. This is because oocyst counts generally follow a Poisson-lognormal distribution, such that spot sampling tends to miss the ‘rare but all-important’ high-count samples. In a simulated model of a waterborne outbreak of cryptosporidiosis, nine out of every ten 100-l spot samples underestimated the risk to the population to some degree, and 33% contained no oocysts at all (Gale 2000). Thus, there is a one-in-three chance of a 100-l spot sample suggesting zero risk when, according to the model, there is in fact an outbreak across the population. Indeed, Craun et al. (1998) analysed data from 12 reported waterborne outbreaks of cryptosporidiosis and concluded that there was no clear association between the oocyst concentrations measured in the water and the risk of illness. In four of those 12 outbreaks (33%), no oocysts were detected in any 100-l samples.

To assess the risks to the population, Carmena et al. (2007) should therefore focus on the annual arithmetic mean oocyst concentration in tap water, not on whether some tap water samples exceed an action level at certain times of the year. The arithmetic mean for the 82 SWTF tap water samples was 2·3 oocysts 100 l−1, and at least one sample exceeded the action level with 61 oocysts 100 l−1. A theoretical calculation, however, suggests that dispersing those oocysts evenly across the whole supply over the whole year would not decrease the group risk, even though all concentrations would then be 2·3 oocysts 100 l−1 and well below the action level. This is because the group risk is directly related to the total number of oocysts in the drinking water, not to how they are distributed in space and time. In contrast, the action-level approach relies on there being sufficient spatial/temporal heterogeneity (‘clustering’) in the oocyst distributions, such that the oocyst concentration exceeds the action level at some point.

The infectivity of the Cryptosporidium oocyst may differ considerably between human hosts owing to acquired protective immunity (Teunis et al. 2002). This is an important consideration in assessing the risks to consumers in the two supplies studied by Carmena et al. (2007). Indeed, consumers supplied by the SWTF should have considerable protection from previous exposure. Risk assessment is about allowing for changes, e.g. in the efficiency of a process such as drinking water treatment.
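The single-(oo)cyst and group-risk arguments above can be sketched with a simple exponential dose-response model, in which each ingested oocyst independently initiates infection with some probability r. The value of r used here is purely illustrative, not a fitted estimate for Cryptosporidium; the point is that the risk from one oocyst is non-zero, and that in the low-dose (linear) regime the expected number of infections in a group depends on the total number of oocysts ingested, not on how they are distributed among consumers:

```python
import math

def p_infection(dose, r):
    """Exponential dose-response: each ingested oocyst independently
    initiates infection with probability r."""
    return 1.0 - math.exp(-r * dose)

# Illustrative per-oocyst infectivity; NOT a fitted value for
# Cryptosporidium (real estimates vary widely between isolates and hosts).
r = 0.004

# The risk from a single oocyst is small but non-zero:
single = p_infection(1, r)

# ID50: the dose at which infection occurs with 50% probability
id50 = math.log(2) / r

# Group risk tracks the TOTAL number of oocysts: 1000 consumers each
# ingesting 1 oocyst (dispersed) vs 100 consumers each ingesting 10
# (clustered) give almost the same expected number of infections.
dispersed = 1000 * p_infection(1, r)
clustered = 100 * p_infection(10, r)

print(f"P(infection | 1 oocyst) = {single:.4f}")
print(f"ID50 = {id50:.0f} oocysts")
print(f"expected infections: dispersed {dispersed:.2f}, clustered {clustered:.2f}")
```

At high individual doses the exponential curve saturates, so clustering does begin to matter; but at the concentrations at issue here the linear approximation holds and the arithmetic mean governs the group risk.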
Treatment processes tend to increase the variation in counts within the treated product compared with the raw material because of fluctuations in efficiency between batches (temporal heterogeneity) and within batches (spatial heterogeneity). These represent ‘bad days’ and ‘by-pass’, respectively, and were not considered by Carmena et al. (2007). Furthermore, monitoring programmes tend to overestimate the net pathogen removal by a treatment process, because spot sampling underestimates the arithmetic mean level more in the treated water than in the raw water (Gale 2000). Thus, the >3-log removal quoted by Carmena et al. (2007) for conventional water treatment facilities (CWTF) may be too optimistic.

For protecting public health, the efficiency of treatment over the whole year is key, and in particular detecting the ‘bad days’ when a filter fails. Such a failure could have a serious impact on the health of consumers supplied by the CWTF, for which each of the 31 spot samples (100 l each) tested by Carmena et al. (2007) recorded zero oocysts. Those consumers would be expected to have little acquired protective immunity, and therefore greater susceptibility to failures in treatment. Such failures are unlikely to be detected by taking a single 100-l spot sample (collection time 20–30 min) once a month.
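Gale's point that spot samples systematically understate the arithmetic mean of a skewed count distribution can be illustrated with a small simulation. The lognormal parameters below are assumed values chosen to give marked spatial/temporal heterogeneity; they are not fitted to the data of Carmena et al. (2007):

```python
import math
import random

random.seed(42)

# Illustrative lognormal model for the oocyst concentration seen by a
# single 100-l spot sample (mu, sigma are assumptions, not fitted values).
mu, sigma = 0.0, 2.56
n = 100_000

samples = [random.lognormvariate(mu, sigma) for _ in range(n)]

# Analytical arithmetic mean of a lognormal distribution
true_mean = math.exp(mu + sigma ** 2 / 2)

# Fraction of single spot samples falling below the arithmetic mean;
# analytically this is Phi(sigma / 2), about 0.90 for sigma = 2.56.
below = sum(s < true_mean for s in samples) / n

print(f"arithmetic mean concentration: {true_mean:.1f} oocysts per sample")
print(f"fraction of spot samples below the mean: {below:.2f}")
```

With these parameters roughly nine out of ten single spot samples fall below the true arithmetic mean, mirroring the behaviour reported for the simulated outbreak in Gale (2000); because the same skewness is stronger in treated water than in raw water, the apparent log removal inferred from paired spot samples is biased upwards.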

