Abstract

Argo floats have significantly improved the observation of the global ocean interior, but as the size of the database increases, so does the need for efficient tools to perform reliable quality control. It is shown here how the classical method of optimal analysis can be used to validate very large datasets before operational or scientific use. The analysis system employed is the one implemented at the Coriolis data center to produce the weekly fields of temperature and salinity, and the key data are the analysis residuals. The impacts of the various sensor errors are evaluated, and twin experiments are performed to measure the system's capacity to identify these errors. It appears that, for a typical data distribution, the analysis residuals extract 2/3 of the sensor error after a single analysis. The method has been applied to the full Argo Atlantic real-time dataset for the 2000–04 period (482 floats), and 15% of the floats were detected as having salinity drifts or offsets. A second test was performed on the delayed-mode dataset (120 floats) to check the overall consistency; except for a few isolated anomalous profiles, the corrected datasets were found to be good overall. The last experiment, performed on the Coriolis real-time products, takes into account the recently discovered problem in the pressure labeling. For this experiment, a sample of 36 floats, mixing well-behaved and anomalous instruments of the 2003–06 period, was considered, and the simple test designed to detect the most common systematic anomalies successfully identified the deficient floats.
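The detection idea described above, comparing each profile against an analysed field and flagging floats whose residuals show a systematic part, can be sketched as follows. This is a minimal illustration only, not the Coriolis implementation: the float identifiers, the 0.02 PSU threshold, and the helper name are hypothetical, and the 2/3 rescaling factor is taken from the twin-experiment result quoted in the abstract.

```python
# Illustrative sketch (not the Coriolis system): flag floats whose
# salinity residuals (observation minus analysed field) show a
# systematic offset or drift. Threshold and names are hypothetical.
from statistics import mean

def flag_salinity_offset(residuals_by_float, threshold_psu=0.02,
                         extraction_factor=2 / 3):
    """Return floats whose estimated sensor error exceeds the threshold.

    residuals_by_float: dict mapping a float id to a list of per-profile
    mean salinity residuals (PSU). Since a single analysis retains only
    about 2/3 of the sensor error in the residuals, the systematic
    residual is rescaled to estimate the full error.
    """
    flagged = {}
    for float_id, residuals in residuals_by_float.items():
        systematic = mean(residuals)  # persistent part ~ offset/drift
        estimated_error = systematic / extraction_factor
        if abs(estimated_error) > threshold_psu:
            flagged[float_id] = estimated_error
    return flagged

# Example: one well-behaved float, one with a systematic salinity offset
sample = {
    "wmo_1900001": [0.001, -0.002, 0.003],  # noise-level residuals
    "wmo_1900002": [0.030, 0.028, 0.032],   # ~0.03 PSU systematic residual
}
print(flag_salinity_offset(sample))  # only the second float is flagged
```

In practice the residuals come from the optimal-analysis step itself, which already weights each observation against its neighbours and the climatological background; the sketch only shows the final thresholding stage.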
