Abstract

We present Python Statistical Analysis of Turbulence (P-SAT), a lightweight Python framework that automates the parsing and filtering of velocity data and the computation of turbulence statistics and spectra for steady flows. P-SAT works on single files as well as batch inputs. The framework de-spikes the raw velocity signal of steady flows using several filtering methods, including velocity correlation, signal-to-noise ratio (SNR), and acceleration thresholding. It provides default threshold values for the correlation, SNR, and acceleration thresholding methods, while also allowing the end user to supply custom values. At the end of execution, the framework generates a .csv file containing the turbulence parameters mentioned above. P-SAT can handle velocity time series of both steady and unsteady flows; for unsteady flows, it obtains mean velocities from instantaneous velocities using a Fourier-component-based averaging method. Since P-SAT is developed in Python, it can be deployed and executed on all widely used operating systems. The P-SAT framework is available on GitHub: https://github.com/mayank265/flume.git.

Highlights

  • We have developed Python Statistical Analysis of Turbulence (P-SAT), an open-source, lightweight Python framework that de-spikes raw velocity time series data obtained from an acoustic Doppler velocimeter (ADV) using various filtering methods

  • For every row read by the Python Statistical Analysis of Turbulence (P-SAT) framework (Line 1, Algorithm 1), if any of the correlation values for Ui, Vi, and Wi is less than the correlation threshold (Line 2, Algorithm 1), the corresponding velocity point is marked as a spike (Line 3, Algorithm 1)

  • We provide the end user with the open-source P-SAT framework, which enables the user to filter raw velocity time series data obtained from the Nortek Vectrino+ ADV and compute various turbulence parameters


Summary

Experimental setup and measurements

The dataset used in the present study to test the P-SAT framework was obtained from the experimental study carried out by Deshpande and Kumar [24]; the values reported there are used for the correlation filtering method. For successful execution of the P-SAT framework, the columns of the input .dat files must be in exactly the expected order. For every row read by the P-SAT framework (Line 1, Algorithm 1; a sample row contains the data shown in Table 4), if any of the correlation values for Ui, Vi, and Wi is less than the correlation threshold (Line 2, Algorithm 1), the corresponding velocity point is marked as a spike (Line 3, Algorithm 1). The P-SAT framework takes each input .dat file, performs pre-processing, applies three filters to de-noise the velocity time series data, and calculates various turbulence parameters.
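The correlation filtering step of Algorithm 1 can be sketched as follows. This is a minimal illustration, not P-SAT's actual implementation: the function name and the default threshold of 70 (percent, a value commonly used for ADV correlation screening) are assumptions for the sake of the example.

```python
import numpy as np

def mark_correlation_spikes(cor_u, cor_v, cor_w, threshold=70.0):
    """Mark a sample as a spike if the correlation for any of the three
    velocity components (Ui, Vi, Wi) falls below the threshold.

    The default threshold of 70 (percent) is illustrative only and is
    not necessarily the default used by P-SAT.
    """
    cor_u = np.asarray(cor_u, dtype=float)
    cor_v = np.asarray(cor_v, dtype=float)
    cor_w = np.asarray(cor_w, dtype=float)
    # A point is a spike if ANY component's correlation is below threshold
    return (cor_u < threshold) | (cor_v < threshold) | (cor_w < threshold)

# Example: three samples; the second fails the W-component correlation
spikes = mark_correlation_spikes([95, 90, 88], [92, 85, 91], [89, 40, 93])
```

In a batch run, the boolean mask returned here would be used to flag or replace the corresponding rows of the velocity time series before the remaining filters are applied.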

Discussion
Conclusion and future work
Methods
