Collection of high-resolution, in situ data using environmental sensors is common in hydrology and other environmental science domains. Sensors are subject to drift, fouling, and other factors that can affect the quality of measurements and their subsequent use in scientific analyses. The process by which sensor data are reviewed to verify validity often requires making edits in post-processing to generate approved datasets. This quality control process involves decisions by technicians, data managers, or data users about how to handle problematic data. In this study, an experiment was designed and conducted in which multiple participants performed quality control post-processing on the same datasets using consistent guidelines and tools to assess the effect of the individual technician on the resulting datasets. The effect of technician experience and training was also assessed by having a group of novices unfamiliar with the data follow the same procedures and comparing their results to those generated by a group of experienced technicians. Results showed greater variability among the outcomes of experienced participants than among those of novices, which we attribute to novice participants' reluctance to implement unfamiliar procedures that change the data. The greatest variability between participants' results was associated with calibration events, for which participants selected different methods and values by which to shift the data. These corrections resulted in variability exceeding the range of manufacturer-reported sensor accuracy. To reduce subjectivity and variability in quality control, we recommend that monitoring networks establish detailed quality control guidelines and consider a collaborative approach in which multiple technicians evaluate datasets prior to publication.
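
The shifting of data at calibration events mentioned above can be made concrete with a small, hypothetical example. The sketch below (Python, using pandas and NumPy) applies a linearly interpolated drift correction leading up to a calibration check; the function name, the example values, and the choice of a linear ramp are illustrative assumptions for this sketch, not the specific procedure or tools used by the study's participants.

```python
import numpy as np
import pandas as pd


def linear_drift_correction(series: pd.Series,
                            calibration_time: pd.Timestamp,
                            observed_value: float,
                            reference_value: float) -> pd.Series:
    """Shift sensor readings to match a calibration reference, assuming the
    drift accumulated linearly from the start of the series to the
    calibration event (one common choice; a single step shift is another)."""
    offset = reference_value - observed_value  # shift needed at the calibration check
    # Fraction of elapsed time for each record: 0 at the series start,
    # 1 at the calibration event (later records, if any, get the full shift).
    frac = (series.index - series.index[0]) / (calibration_time - series.index[0])
    frac = np.clip(np.asarray(frac, dtype=float), 0.0, 1.0)
    return series + offset * frac


# Hypothetical example: a week of hourly specific-conductance readings with a
# calibration check at the end showing the sensor reads 12 units low.
idx = pd.date_range("2021-06-01", periods=168, freq="h")
raw = pd.Series(450.0 + np.random.normal(0.0, 2.0, size=len(idx)), index=idx)
corrected = linear_drift_correction(raw, idx[-1],
                                    observed_value=438.0,
                                    reference_value=450.0)
```

The decisions embedded in such a correction, whether to ramp the shift or apply it as a step, and which reference value to shift toward, are exactly the kinds of choices on which participants in this study diverged.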