Abstract

Convolutional neural networks have been used successfully in a variety of tasks and have recently been adapted to improve processing steps in Particle Image Velocimetry (PIV). Recurrent All-Pairs Field Transforms (RAFT) as an optical flow estimation backbone achieves a new state-of-the-art accuracy on public synthetic PIV datasets, generalizes well to unseen real-world experimental data, and allows a significantly higher spatial resolution than state-of-the-art PIV algorithms based on cross-correlation methods. However, the huge diversity of dynamic flows and varying particle image conditions requires PIV processing schemes to generalize well to unseen flow and lighting conditions. If these conditions differ strongly from the synthetic training data, the performance of fully supervised learning-based PIV tools may degrade. To tackle these issues, we augment the training procedure with an unsupervised learning paradigm, which removes the need for a general synthetic dataset and can, in principle, improve the inference capability of a deep learning model on challenging real-world experimental data. We therefore propose URAFT-PIV, an unsupervised deep neural network architecture for optical flow estimation in PIV applications, and show that our combination of state-of-the-art deep learning pipelines and unsupervised learning achieves a new state-of-the-art accuracy for unsupervised PIV networks while performing similarly to supervised LiteFlowNet-based competitors. Furthermore, we show that URAFT-PIV also performs well under more challenging flow field and image conditions, such as low particle density and changing lighting conditions, and demonstrate its generalization capability through an out-of-the-box application to real-world experimental data. Our tests also suggest that current state-of-the-art loss functions might be a limiting factor for the performance of unsupervised optical flow estimation.
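The core idea of the unsupervised paradigm mentioned above can be illustrated with a minimal sketch: instead of comparing the predicted flow to a ground-truth field, the second particle image is warped back by the predicted flow and compared photometrically to the first. The function names and the Charbonnier-style penalty parameters below are illustrative assumptions, not the paper's exact loss implementation.

```python
import numpy as np

def warp_backward(img, flow):
    """Sample img at (x + u, y + v) with bilinear interpolation (border-clamped)."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = np.clip(xs + flow[..., 0], 0, w - 1)
    y = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, w - 1), np.clip(y0 + 1, 0, h - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0] + wx * (1 - wy) * img[y0, x1]
            + (1 - wx) * wy * img[y1, x0] + wx * wy * img[y1, x1])

def photometric_loss(img1, img2, flow, eps=1e-3, q=0.4):
    """Robust (Charbonnier-style) penalty on the brightness residual.

    A flow that correctly maps particles of img1 onto img2 makes the
    warped second image match the first, so this loss needs no ground truth.
    """
    residual = img1 - warp_backward(img2, flow)
    return np.mean((residual ** 2 + eps ** 2) ** q)
```

In practice such a data term is combined with a smoothness regularizer on the flow field and minimized end-to-end over the network weights; the sketch above only shows the supervision signal itself.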

