Abstract

Simulations of pesticide fate in soils are often based on persistence models developed nearly 30 years ago. These models predict dissipation in the field on a daily basis by correcting laboratory degradation half‐lives for actual soil temperature and moisture content. They have been extensively applied, but to date no attempt has been made to evaluate existing studies in a consistent, quantitative way. This paper reviews 178 studies comparing pesticide soil residues measured in the field with those simulated by persistence models. The simulated percentage of initial pesticide concentration at the time of 50% measured loss was taken as a common criterion for model performance. The models showed an overall tendency to overestimate persistence. Simulated values ranged from 12 to 96% of initial pesticide concentrations with a median of 60%. Simulated soil residues overestimated the target value (50% of initial) by more than a factor of 1.25 in 44% of the cases. An underestimation by more than a factor of 1.25 was found in only 17% of the experiments. Discrepancies between simulated and observed data are attributed to difficulties in characterizing pesticide behavior under outdoor conditions using laboratory studies. These arise because of differences in soil conditions between the laboratory and the field and the spatial and temporal variability of degradation. Other possible causes include losses in the field by processes other than degradation, deviations of degradation from first‐order kinetics, discrepancies between simulated and actual soil temperature and moisture content, and the lack of soil‐specific degradation parameters. Implications for modeling of pesticide behavior within regulatory risk assessments are discussed.
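The daily correction scheme the abstract describes can be illustrated with a short sketch. This is not the specific model evaluated in the paper; it is a minimal example assuming a Q10 rule for temperature and a Walker power function for moisture, two corrections commonly used in persistence modeling (e.g. in FOCUS guidance). All parameter values below (reference temperature, reference moisture, Q10, Walker exponent) are illustrative assumptions.

```python
import math

def corrected_half_life(dt50_lab, temp_c, theta,
                        temp_ref=20.0, theta_ref=0.35,
                        q10=2.2, walker_b=0.7):
    """Adjust a laboratory half-life (DT50, days) to field conditions.

    Assumptions (illustrative, not from the paper): Q10 temperature
    correction and Walker moisture correction, capped so that moisture
    above the reference level does not accelerate degradation.
    """
    f_temp = q10 ** ((temp_c - temp_ref) / 10.0)          # warmer -> faster
    f_moist = min((theta / theta_ref) ** walker_b, 1.0)   # drier -> slower
    k = (math.log(2) / dt50_lab) * f_temp * f_moist       # corrected rate
    return math.log(2) / k

def simulate_residue(dt50_lab, daily_temp_c, daily_theta, c0=100.0):
    """Daily first-order decay with a per-day corrected rate constant.

    Returns the residue as a percentage of the initial amount c0.
    """
    c = c0
    for temp_c, theta in zip(daily_temp_c, daily_theta):
        k = math.log(2) / corrected_half_life(dt50_lab, temp_c, theta)
        c *= math.exp(-k)  # one day of first-order loss
    return c
```

Under this scheme, ten days at reference conditions with a 10-day laboratory DT50 yield exactly 50% of the initial concentration; cooler or drier field soil lengthens the effective half-life, which is one route to the overestimation of persistence reported above.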
