The inference of velocity fields from the displacement of objects and/or fields visible within a series of consecutive images over known time intervals has been explored extensively within experimental fluid dynamics. Real image sequences of environmental hydrodynamic flows, however, pose additional challenges for velocity field inference due to factors such as lighting inhomogeneity, particle density, and camera orientation and stability. Here we investigate the performance of classical and deep learning based velocity estimation methods on three experimental datasets: a hydrodynamics laboratory dataset of different flow types and two open-source datasets of aerial river footage from field campaigns. The river datasets are accompanied by observational datasets of in-situ measurements. In particular, we investigate the generalisation of deep learning based methods from ideal training conditions to real images. We consider three deep learning approaches: recurrent all-pairs field transforms (RAFT), a physics-informed approach, and an unsupervised learning approach (UnLiteFlowNet-PIV). Results indicate that RAFT, which achieves state-of-the-art performance on particle image datasets, generalised well to the laboratory dataset and field imagery. The physics-informed approach performed similarly to RAFT across the laboratory dataset, whilst generalisation to drone-based data proved challenging. Across the laboratory dataset, UnLiteFlowNet-PIV showed good performance within wake regions but underestimated channel flows and freestream regions with limited vorticity, and also suffered under poor seeding density. Limited fine-tuning of UnLiteFlowNet-PIV on laboratory data, however, led to improved performance in these regions, indicating the potential of the unsupervised learning approach for environmental flows where 2D ground truth data sources are unavailable for training.