Abstract

Deep-sea environmental datasets are ever-increasing in size and diversity, as technological advances lead monitoring studies towards long-term, high-frequency data acquisition protocols. This study presents examples of pre-analysis data treatment steps applied to the environmental time series collected by the Internet Operated Deep-sea Crawler “Wally” during a 7-year deployment (2009–2016) in the Barkley Canyon methane hydrates site, off Vancouver Island (BC, Canada). Pressure, temperature, electrical conductivity, flow, turbidity, and chlorophyll data were subjected to different standardizing, normalizing, and de-trending methods on a case-by-case basis, depending on the nature of the treated variable and the range and scale of the values provided by each of the different sensors. The final pressure, temperature, and electrical conductivity (transformed to practical salinity) datasets are ready for use. On the other hand, in the cases of flow, turbidity, and chlorophyll, further in-depth processing, in tandem with data describing the movement and position of the crawler, will be needed in order to filter out all possible effects of the latter. Our work highlights challenges and solutions in multiparametric data acquisition and quality control, and represents a substantial step toward ensuring that the available environmental data meet high quality standards and support the production of reliable scientific results.
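For illustration only, the following minimal Python sketch shows how two of the pre-analysis steps summarized above could be carried out: conversion of electrical conductivity to practical salinity (TEOS-10, via the gsw package) and a simple standardize-and-de-trend pass (via pandas). This is not the processing code used in the study; the package choices, column names, sampling interval, and window length are assumptions made for the example.

# Minimal sketch (assumptions: gsw and pandas available; hourly sampling;
# hypothetical column names). Not the authors' processing pipeline.
import gsw
import pandas as pd

def conductivity_to_practical_salinity(cond_mS_cm, temp_C, pressure_dbar):
    # Practical salinity (PSS-78) from conductivity (mS/cm), in-situ
    # temperature (deg C, ITS-90) and sea pressure (dbar), per TEOS-10.
    return gsw.SP_from_C(cond_mS_cm, temp_C, pressure_dbar)

def standardize_and_detrend(series, window=24 * 30):
    # Z-score standardization followed by subtraction of a centered rolling
    # median; the 30-day window (hourly data assumed) is an illustrative choice.
    z = (series - series.mean()) / series.std()
    trend = z.rolling(window=window, center=True, min_periods=1).median()
    return z - trend

# Hypothetical usage on a time-indexed DataFrame df with columns
# 'conductivity', 'temperature', 'pressure' and 'turbidity':
# df['salinity'] = conductivity_to_practical_salinity(
#     df['conductivity'], df['temperature'], df['pressure'])
# df['turbidity_detrended'] = standardize_and_detrend(df['turbidity'])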

Highlights

  • Our spatio-temporal sampling and observational capabilities are limiting our knowledge of most deep-sea environments [1,2]

  • Long-term time series at frequencies matching biological time-scales are essential in order to expand our understanding of highly complex physical, geochemical and biological phenomena [3,4,5]

  • The complete, processed time series for all variables are available in Supplementary Table S2


Introduction

Our spatio-temporal sampling and observational capabilities are limiting our knowledge of most deep-sea environments [1,2]. The issue of the reliability of reference data has been brought up as imperative, in order to avoid biases at the time of parametrization and modeling of large-scale processes [6,7,8]. As datasets are getting bigger and more diverse, data collection, storage, a posteriori treatment, analysis, and visualization have to be standardized within a nationally and globally coordinated, integrated plan [9,10,11,12,13,14,15,16,17], going towards a future with automated analyses taking over from traditional, manual data treatment [18,19,20]. In this framework, communication and collaboration among scientists, engineers, and experts in the respective technological field is the only way forward in order to tackle the challenges rising from local groups working individually [21].
