Abstract

Inclusion of pesticide monitoring data in pesticide risk assessment is important yet challenging for several reasons, including infrequent or irregular data collection; disparate sources, procedures, and associated monitoring periods; and interpretation of the data itself in a policy context. Left unaddressed, these challenges are likely to lead to unintended and unforeseen risk assessment conclusions. While individual water quality monitoring programs report standard operating procedures and quality control practices for their own data, cross-checking for records duplicated from one database to another does not routinely occur. Consequently, we developed a novel quality control and assurance methodology to identify errors and duplicated records, with the goal of creating an aggregated, single-pesticide database for use in ecological risk assessment. This methodology includes (1) standardization and reformatting practices, (2) data error and duplicate record identification protocols, (3) procedures for handling missing or inconsistent limit of detection and quantification reporting, and (4) site metadata scoring and ranking procedures to flag likely duplicate records. We applied this methodology to develop an aggregated (multiple-source), national-scale database for atrazine from a diverse set of surface water monitoring programs. Approximately 31% of the ~385,000 total records were resolved and/or removed as duplicates. A procedure for identifying sample replicates was also developed. While the quality control and assurance methodologies developed in this work were applied to atrazine, they demonstrate more generally how any aggregated single-pesticide database would benefit from the methods described herein before use in subsequent statistical analysis or risk assessment.
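The abstract does not specify how duplicate records are detected, but the core idea of flagging likely cross-database duplicates via field standardization can be sketched as follows. All field names, the normalization rules, and the rounding precision here are illustrative assumptions, not the authors' actual protocol:

```python
def normalize(record):
    """Build a comparison key from fields that duplicated records tend to
    share across databases: site id, sample date, analyte, and concentration
    rounded to a common precision. Field names are hypothetical."""
    return (
        record["site_id"].strip().upper(),
        record["sample_date"],                        # assumed ISO yyyy-mm-dd
        record["analyte"].strip().lower(),
        round(float(record["concentration_ug_l"]), 3),
    )

def flag_duplicates(records):
    """Group records by normalized key; each record after the first in a
    group is flagged as a likely cross-database duplicate."""
    seen = {}
    flags = []
    for i, rec in enumerate(records):
        key = normalize(rec)
        if key in seen:
            flags.append(i)   # likely duplicate of records[seen[key]]
        else:
            seen[key] = i
    return flags

records = [
    {"site_id": "USGS-01", "sample_date": "2019-05-01",
     "analyte": "Atrazine", "concentration_ug_l": "0.120"},
    {"site_id": "usgs-01 ", "sample_date": "2019-05-01",
     "analyte": "atrazine", "concentration_ug_l": "0.12"},   # same sample, other source
    {"site_id": "USGS-02", "sample_date": "2019-05-01",
     "analyte": "Atrazine", "concentration_ug_l": "0.200"},
]
print(flag_duplicates(records))  # → [1]
```

In practice the paper's metadata scoring and ranking step (item 4) would weigh how closely site metadata agree rather than requiring exact key matches; this sketch shows only the exact-match baseline.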
