A recurrent neural network for particle trajectory prediction in complex flow fields

Abstract

Particle tracking velocimetry (PTV) is a flow velocity measurement technique with great potential to achieve high spatial resolution. It measures the velocity through the Lagrangian trajectories of tracer particles. When the tracer concentration is high, one of the most challenging steps is to correctly match the particles across different time steps. Errors in particle matching not only interrupt the tracking process but also reduce the accuracy of the obtained velocity field. In the current study, recurrent neural network architectures, which are well suited to sequential data, are employed to address this issue. Various combinations of long short-term memory (LSTM) and gated recurrent unit (GRU) networks are proposed and compared. The networks are trained and tested using a dataset containing several typical turbulent flow fields, including flow around a cylinder, the wake of a hydrofoil, channel flow, and thermal plumes. Three-dimensional trajectories are generated from computational fluid dynamics simulation results, covering a wide range of particle densities in different complex flow fields. The long trajectories are segmented into short path snippets, yielding a total of 150 000 snippets, of which 80% are used for network training and 20% for validation. The networks are trained to predict the particle positions in the next frame from the trajectories of the previous 6–10 frames. Based on the parametric comparison, a double-layer LSTM-GRU network with 128 neurons per layer shows the best performance. The results indicate that the accurate prediction rate of the LSTM-GRU network is over 15% higher than that of two traditional PTV algorithms, the four-frame best estimation and the relaxation algorithm. In channel flow characterized by twisted particle trajectories, when the particle density is ∼0.03 ppp, the extraction rate of the LSTM-GRU algorithm exceeds 95%. Even in complex turbulent regions of a thermal plume flow field, where the performance of traditional algorithms drops significantly as the particle concentration increases, the extraction rate of the LSTM-GRU algorithm remains higher than 90%. Finally, the prediction algorithm is applied to practical experimental data of a buoyant jet. The results reveal that the proposed network can resolve long trajectories satisfactorily for loop segmentation lengths varying between 6 and 10. The proposed LSTM-GRU network is shown to be robust and computationally efficient and has significant potential for large-scale particle tracking in a wide range of experimental applications.
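
As a rough illustration of the architecture described above (not the authors' exact implementation), the sketch below stacks an LSTM layer and a GRU layer with 128 units each and maps a snippet of past 3D particle positions to the predicted position in the next frame; the snippet length of 8 frames, the training settings, and the dummy data are assumptions.

```python
# Minimal sketch of a stacked LSTM-GRU next-position predictor, assuming inputs
# are snippets of 3D particle positions with shape (batch, n_frames, 3).
# Everything except the 128-unit double layer is illustrative only.
import numpy as np
import tensorflow as tf

n_frames = 8          # past frames per snippet (the paper uses 6-10)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_frames, 3)),
    tf.keras.layers.LSTM(128, return_sequences=True),  # first recurrent layer
    tf.keras.layers.GRU(128),                           # second recurrent layer
    tf.keras.layers.Dense(3),                           # predicted (x, y, z) of the next frame
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for the CFD-generated trajectory snippets.
x = np.random.rand(1024, n_frames, 3).astype("float32")
y = np.random.rand(1024, 3).astype("float32")
model.fit(x, y, validation_split=0.2, epochs=2, batch_size=64, verbose=0)
```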


Similar Papers
  • Research Article
  • Cited by 82
  • 10.1016/j.egyr.2022.07.139
Multivariate time series prediction by RNN architectures for energy consumption forecasting
  • Aug 3, 2022
  • Energy Reports
  • Ibtissam Amalou + 2 more

  • Conference Article
  • Cited by 5
  • 10.1109/icacccn.2018.8748691
Meta-heuristic based Optimized Deep Neural Network for Streaming Data Prediction
  • Oct 1, 2018
  • Puneet Kumar + 1 more

The categories and quantity of data are expanding exponentially with the on-going wave of connectivity. A number of connected devices and data sources continuously generate a huge amount of data at very high speed. This paper investigates methods that have been used for streaming data processing, such as the Naive Bayes classifier, Very Fast Decision Trees (VFDT), ensemble methods, and clustering-based methods. In this paper, a recurrent neural network (RNN) is implemented to predict the next sequence of a data stream. Three types of sequential data streams are considered: uniform rectangular data, uniform sinusoidal data, and non-uniform sinc pulse data. Various RNN architectures, namely the simple RNN, RNN with long short-term memory (LSTM), RNN with gated recurrent units (GRU), and RNN optimized with a genetic algorithm (GA), are implemented for various combinations of network hyper-parameters, such as the number of hidden layers, number of neurons per layer, activation function, and optimizer. The optimal combination of hyper-parameters is selected using the GA. With sample data streams, the simple RNN shows better prediction accuracy than LSTM and GRU for a single-hidden-layer architecture. As the RNN architectures get deeper, LSTM and GRU outperform the simple RNN. The optimized RNN is experimentally observed to be 78.13% faster than a single-layer LSTM architecture and 82.76% faster than an LSTM model with four hidden layers, with accuracy declines of 8.67% and 12.67%, respectively.
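
As a loose illustration of the single-hidden-layer comparison reported above (the genetic-algorithm hyper-parameter search is omitted), the sketch below trains equally sized SimpleRNN, LSTM, and GRU forecasters on a sinusoidal stream; window length, layer width, and training settings are assumptions.

```python
# Sketch: compare single-layer SimpleRNN, LSTM, and GRU on a sinusoidal stream.
import numpy as np
import tensorflow as tf

stream = np.sin(np.linspace(0, 100, 5000)).astype("float32")
win = 20
X = np.stack([stream[i:i + win] for i in range(len(stream) - win)])[..., None]
y = stream[win:].reshape(-1, 1)

for name, cell in [("SimpleRNN", tf.keras.layers.SimpleRNN),
                   ("LSTM", tf.keras.layers.LSTM),
                   ("GRU", tf.keras.layers.GRU)]:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(win, 1)),
        cell(32),                       # single hidden recurrent layer
        tf.keras.layers.Dense(1),       # next value of the stream
    ])
    model.compile(optimizer="adam", loss="mse")
    hist = model.fit(X, y, epochs=3, batch_size=64,
                     validation_split=0.2, verbose=0)
    print(name, "val MSE:", hist.history["val_loss"][-1])
```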

  • Research Article
  • Cited by 62
  • 10.1016/j.mex.2024.102946
A critical review of RNN and LSTM variants in hydrological time series predictions
  • Sep 12, 2024
  • MethodsX
  • Muhammad Waqas + 1 more

  • Book Chapter
  • Cited by 4
  • 10.1007/978-3-319-59758-4_21
Recurrent Neural Network Architectures for Event Extraction from Italian Medical Reports
  • Jan 1, 2017
  • Natalia Viani + 8 more

Medical reports include many occurrences of relevant events in the form of free text. To make data easily accessible and improve medical decisions, clinical information extraction is crucial. Traditional extraction methods usually rely on the availability of external resources, or require complex annotated corpora and elaborately designed features. Especially for languages other than English, progress has been limited by the scarce availability of tools and resources. In this work, we explore recurrent neural network (RNN) architectures for clinical event extraction from Italian medical reports. The proposed model includes an embedding layer and an RNN layer. To find the best configuration for event extraction, we explored different RNN architectures, including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). We also tried feeding morpho-syntactic information into the network. The best result was obtained by using the GRU network with additional morpho-syntactic inputs.
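
A minimal sketch of the general idea, word embeddings concatenated with morpho-syntactic features and fed to a GRU sequence tagger, is given below; the vocabulary size, tag set, feature dimensionality, and label count are assumptions rather than the paper's configuration.

```python
# Sketch of a GRU sequence tagger fed with word embeddings plus extra
# morpho-syntactic features (e.g., one-hot POS tags). All sizes are assumptions.
import tensorflow as tf

vocab_size, n_pos, n_labels, seq_len = 10000, 17, 5, 50

words = tf.keras.layers.Input(shape=(seq_len,), dtype="int32")
pos = tf.keras.layers.Input(shape=(seq_len, n_pos))        # morpho-syntactic features
emb = tf.keras.layers.Embedding(vocab_size, 100)(words)    # word embeddings
x = tf.keras.layers.Concatenate()([emb, pos])
x = tf.keras.layers.GRU(128, return_sequences=True)(x)
out = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(n_labels, activation="softmax"))(x)

model = tf.keras.Model([words, pos], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```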

  • Conference Article
  • Cited by 147
  • 10.1109/icmla.2018.00035
Network Traffic Prediction Using Recurrent Neural Networks
  • Dec 1, 2018
  • Nipun Ramakrishnan + 1 more

The network traffic prediction problem involves predicting characteristics of future network traffic from observations of past traffic. Network traffic prediction has a variety of applications including network monitoring, resource management, and threat detection. In this paper, we propose several Recurrent Neural Network (RNN) architectures (the standard RNN, Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRU)) to solve the network traffic prediction problem. We analyze the performance of these models on three important problems in network traffic prediction: volume prediction, packet protocol prediction, and packet distribution prediction. We achieve state-of-the-art results on the volume prediction problem on public datasets such as the GEANT and Abilene networks. We also believe this is the first work in the domain of protocol prediction and packet distribution prediction using RNN architectures. In this paper, we show that RNN architectures demonstrate promising results in all three of these domains in network traffic prediction, outperforming standard statistical forecasting models significantly.
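
For illustration only, the sketch below frames a univariate volume series as sliding windows and fits a small LSTM regressor; the synthetic series, window length, and layer size are placeholders, not the GEANT or Abilene setup.

```python
# Sketch: frame a univariate traffic-volume series as windows -> next value,
# then fit a small LSTM regressor. Data and sizes are placeholders.
import numpy as np
import tensorflow as tf

def make_windows(series, win):
    """Return (samples, win, 1) inputs and their next-step targets."""
    X = np.stack([series[i:i + win] for i in range(len(series) - win)])
    return X[..., None].astype("float32"), series[win:].astype("float32")

volume = np.abs(np.random.randn(2000)).cumsum()    # stand-in for link volumes
volume = (volume - volume.mean()) / volume.std()   # simple normalization
X, y = make_windows(volume, win=24)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```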

  • Book Chapter
  • Cited by 6
  • 10.1007/978-3-030-55180-3_28
Comparison of Hybrid Recurrent Neural Networks for Univariate Time Series Forecasting
  • Aug 25, 2020
  • Anibal Flores + 2 more

The work presented in this paper aims to improve the accuracy of forecasting models for univariate time series. To this end, hybrid models of two and four layers based on recurrent neural networks such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) are evaluated on two time series, corresponding to downward thermal infrared and all-sky insolation incident on a horizontal surface, obtained from NASA’s repository. In the first time series, the results achieved by the two-layer hybrid models (LSTM + GRU and GRU + LSTM) outperformed the results achieved by the non-hybrid models (LSTM + LSTM and GRU + GRU), while only two of six four-layer hybrid models (GRU + LSTM + GRU + LSTM and LSTM + LSTM + GRU + GRU) outperformed the non-hybrid models (LSTM + LSTM + LSTM + LSTM and GRU + GRU + GRU + GRU). In the second time series, only one of the two-layer hybrid models (LSTM + GRU) outperformed the two non-hybrid models (LSTM + LSTM and GRU + GRU), while none of the four-layer hybrid models could exceed the results of the non-hybrid models.
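
A minimal sketch of how the two-layer stacks compared above (LSTM + GRU, GRU + LSTM, LSTM + LSTM, GRU + GRU) can be generated for one-step forecasting is given below; layer widths and input shape are assumptions.

```python
# Sketch: build the four two-layer stacks compared in the paper.
# Widths and the input window length are illustrative.
import tensorflow as tf

LAYERS = {"LSTM": tf.keras.layers.LSTM, "GRU": tf.keras.layers.GRU}

def build_stack(spec, win=30, units=64):
    """spec is e.g. ('LSTM', 'GRU'); returns a compiled one-step forecaster."""
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(win, 1))])
    model.add(LAYERS[spec[0]](units, return_sequences=True))
    model.add(LAYERS[spec[1]](units))
    model.add(tf.keras.layers.Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

models = {"+".join(s): build_stack(s)
          for s in [("LSTM", "GRU"), ("GRU", "LSTM"),
                    ("LSTM", "LSTM"), ("GRU", "GRU")]}
```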

  • Research Article
  • Cited by 1
  • 10.15294/sji.v11i3.4067
Classification Modeling with RNN-based, Random Forest, and XGBoost for Imbalanced Data: A Case of Early Crash Detection in ASEAN-5 Stock Markets
  • Aug 5, 2024
  • Scientific Journal of Informatics
  • Deri Siswara + 2 more

Purpose: This research aims to evaluate the performance of several Recurrent Neural Network (RNN) architectures, including Simple RNN, Gated Recurrent Units (GRU), and Long Short-Term Memory (LSTM), compared to classic algorithms such as Random Forest and XGBoost, in building classification models for early crash detection in the ASEAN-5 stock markets. Methods: The study examines imbalanced data, which is expected due to the rarity of market crashes. It analyzes daily data from 2010 to 2023 across the major stock markets of the ASEAN-5 countries: Indonesia, Malaysia, Singapore, Thailand, and the Philippines. A market crash is the target variable when the primary stock price indices fall below the Value at Risk (VaR) thresholds of 5%, 2.5%, and 1%. Predictors include technical indicators from major local and global markets and commodity markets. The study incorporates 213 predictors with their respective lags (5, 10, 15, 22, 50, 200) and uses a time step of 7, expanding the total number of predictors to 1,491. The challenge of data imbalance is addressed with SMOTE-ENN. Model performance is evaluated using the false alarm rate, hit rate, balanced accuracy, and the precision-recall curve (PRC) score. Result: The results indicate that all RNN-based architectures outperform Random Forest and XGBoost. Among the various RNN architectures, Simple RNN is the most superior, primarily due to its simple data characteristics and focus on short-term information. Novelty: This study enhances and extends the range of phenomena observed in previous studies by incorporating variables such as different geographical zones and periods and methodological adjustments.
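
The two preprocessing ideas mentioned above, labeling crash days against an empirical Value-at-Risk threshold and rebalancing with SMOTE-ENN, can be sketched as follows; the synthetic returns, features, and the use of a simple quantile as the VaR threshold are assumptions.

```python
# Sketch: label "crash" days as returns below the empirical 5% VaR threshold,
# then rebalance the training set with SMOTE-ENN. Data are synthetic placeholders.
import numpy as np
from imblearn.combine import SMOTEENN

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 3000)            # stand-in for daily index returns
var_5 = np.quantile(returns, 0.05)             # empirical 5% VaR threshold
y = (returns < var_5).astype(int)              # 1 = crash day (rare class)

X = rng.normal(size=(3000, 20))                # stand-in for lagged technical indicators
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)
print("before:", np.bincount(y), "after:", np.bincount(y_res))
```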

  • Preprint Article
  • 10.5194/egusphere-egu24-6773
Estimating Gross Primary Production via Recurrent Neural Networks: A comparative analysis
  • Nov 27, 2024
  • David Montero + 8 more

Quantifying Gross Primary Production (GPP) is fundamental for understanding terrestrial carbon dynamics, particularly in forests. The overarching question we address here is whether integrating remote sensing (RS) with deep learning (DL) methodologies can enhance the estimation of daily forest GPP on a European scale. The Eddy Covariance (EC) method, although widely used to infer ecosystem-scale estimates of GPP from in situ CO2 exchange measurements, suffers from limited global coverage. When EC data are not available, RS data are often employed to estimate GPP by establishing statistical relationships with in situ observations. Recently, Machine Learning (ML) strategies, particularly involving RS and meteorological inputs, have been used for estimating GPP continuously in space and time. However, the potential of DL techniques, particularly those exploiting the sequential characteristics of time series data (i.e., recurrent neural network architectures), in estimating daily forest GPP has not been comprehensively explored. This includes their performance during photosynthetic downregulation (e.g., during climate extremes such as droughts) and an in-depth examination of the importance of the utilised features. This study presents a comparative analysis of three recurrent neural network architectures, namely standard Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) networks, for daily forest GPP estimation across ICOS ecosystem stations. We assess their performance throughout the entire year, during the growing season, and under photosynthetic downregulation events. This assessment utilises RS inputs (optical data from Sentinel-2, Land Surface Temperature (LST) from MODIS, and radar data from Sentinel-1) combined with potential radiation data. Furthermore, we analysed the importance of these features to provide insights into the complex interactions and dependencies when estimating GPP. Our results indicate that all three architectures yield similar accuracy for full-period and growing-season GPP estimations, with mean NRMSE values of 0.136 and 0.170, respectively. All models exhibit increased errors while estimating GPP during photosynthetic downregulation events and their performance varies notably, with LSTMs showing the best results (NRMSE = 0.202), followed by RNNs (NRMSE = 0.214), and GRUs exhibiting the least efficacy (NRMSE = 0.276). Additionally, our study underscores the importance of potential radiation as a critical feature influencing the GPP response, with LST data proving particularly valuable when estimating GPP during photosynthetic downregulation events, more so than optical or radar data. These findings suggest that recurrent neural network architectures, especially LSTMs, are effective in daily GPP estimation, with slight performance decreases during climate extreme conditions. The study also reveals the efficacy of combining RS data with potential radiation for accurate forest GPP estimation, with LST data being especially crucial when estimating GPP during photosynthetic downregulation events. Our research paves the way for further exploration into recurrent models and innovative architectures, with an emphasis on overcoming the challenges in modelling climate-induced GPP extremes.
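
For reference, one common way to compute the normalized RMSE reported above is sketched below; normalization by the observed range is an assumption, as conventions differ (mean- or standard-deviation-based normalization is also used).

```python
# Sketch: normalized RMSE for comparing GPP models. Normalizing by the observed
# range is an assumption; other conventions divide by the mean or std.
import numpy as np

def nrmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (np.max(y_true) - np.min(y_true))

y_obs = np.array([2.1, 3.4, 5.0, 4.2, 1.8])   # placeholder daily GPP observations
y_hat = np.array([2.0, 3.6, 4.7, 4.5, 2.0])
print(round(nrmse(y_obs, y_hat), 3))
```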

  • Conference Article
  • Cited by 24
  • 10.1109/ijcnn.2016.7727233
Gated Recurrent Neural Tensor Network
  • Jul 1, 2016
  • Andros Tjandra + 4 more

Recurrent Neural Networks (RNNs), which are a powerful scheme for modeling temporal and sequential data, need to capture long-term dependencies in datasets and represent them in hidden layers with a powerful model to capture more information from the inputs. For modeling long-term dependencies in a dataset, the gating mechanism concept can help RNNs remember and forget previous information. Representing the hidden layers of an RNN with more expressive operations (i.e., tensor products) helps it learn a more complex relationship between the current input and the previous hidden layer information. These ideas can generally improve RNN performance. In this paper, we propose a novel RNN architecture that combines the concepts of the gating mechanism and the tensor product into a single model. By combining these two concepts into a single RNN, our proposed models learn long-term dependencies by modeling with gating units and obtain a more expressive and direct interaction between the input and hidden layers using a tensor product on 3-dimensional array (tensor) weight parameters. We use the Long Short-Term Memory (LSTM) RNN and the Gated Recurrent Unit (GRU) RNN and combine them with a tensor product inside their formulations. Our proposed RNNs, called the Long Short-Term Memory Recurrent Neural Tensor Network (LSTMRNTN) and the Gated Recurrent Unit Recurrent Neural Tensor Network (GRURNTN), are made by combining the LSTM and GRU RNN models with the tensor product. We conducted experiments with our proposed models on word-level and character-level language modeling tasks, and the results revealed that our proposed models significantly improve performance compared to our baseline models.
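
The tensor-product idea can be pictured as a bilinear interaction between the current input and the previous hidden state; the sketch below shows only such an interaction term on random tensors, not the full gated LSTMRNTN/GRURNTN equations from the paper.

```python
# Sketch of a bilinear (tensor-product) interaction between input x_t and the
# previous hidden state h_{t-1}: out[b, k] = sum_ij x[b, i] * W[i, j, k] * h[b, j].
# This is only the interaction term, not the paper's full gated formulation.
import tensorflow as tf

batch, d_in, d_hid = 4, 16, 32
x = tf.random.normal([batch, d_in])
h = tf.random.normal([batch, d_hid])
W = tf.Variable(tf.random.normal([d_in, d_hid, d_hid]) * 0.01)  # 3D tensor weights

interaction = tf.einsum("bi,ijk,bj->bk", x, W, h)   # shape (batch, d_hid)
print(interaction.shape)
```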

  • Research Article
  • Cited by 92
  • 10.3390/en12224338
State-of-Health Estimation of Li-ion Batteries in Electric Vehicle Using IndRNN under Variable Load Condition
  • Nov 14, 2019
  • Energies
  • Prakash Venugopal + 1 more

In electric vehicles (EVs), battery management systems (BMS) carry out various functions for effective utilization of the stored energy in lithium-ion batteries (LIBs). Among the numerous functions performed by the BMS, estimating the state of health (SOH) is an essential and challenging task to be accomplished at regular intervals. Accurate estimation of SOH ensures battery reliability by computing the remaining lifetime and forecasting failure conditions to avoid battery risk. Accurate estimation of SOH is challenging due to the uncertain operating conditions of EVs and the complex non-linear electrochemical characteristics demonstrated by LIBs. In most existing studies, standard charge/discharge patterns with numerous assumptions are considered to accelerate the battery ageing process. However, such patterns and assumptions fail to reflect the real-world operating conditions of EV batteries, which is not appropriate for the BMS of EVs. In contrast, this research work proposes a unique SOH estimation approach, using an independently recurrent neural network (IndRNN) in a more realistic manner by adopting the dynamic load profile condition of EVs. This research work illustrates a deep learning-based data-driven approach to estimate SOH by analyzing historical data collected from LIBs. The IndRNN is adopted due to its ability to capture the complex non-linear characteristics of batteries by eliminating the gradient problem and allowing the neural network to learn long-term dependencies among the capacity degradations. Experimental results indicate that the IndRNN-based model is able to predict a battery’s SOH accurately, with the root mean square error (RMSE) reduced to 1.33% and the mean absolute error (MAE) reduced to 1.14%. The maximum error (MAX) produced by the IndRNN throughout the testing process is 2.5943%, which is well below the acceptable SOH error range of ±5% for EVs. In addition, to demonstrate the effectiveness of the IndRNN, the attained results are compared with other well-known recurrent neural network (RNN) architectures such as long short-term memory (LSTM) and gated recurrent unit (GRU). From the comparison of results, it is clearly evident that the IndRNN outperformed the other RNN architectures with the highest SOH accuracy rate.
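
For context, the IndRNN recurrence replaces the full recurrent weight matrix with an independent (element-wise) recurrent weight per neuron; the NumPy sketch below shows that recurrence on placeholder data and is not the paper's SOH model.

```python
# Sketch of the IndRNN recurrence: each neuron has an independent (element-wise)
# recurrent weight, h_t = relu(W x_t + u * h_{t-1} + b). Placeholder data only.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid, T = 4, 8, 50
W = rng.normal(0, 0.1, (d_hid, d_in))
u = rng.uniform(0.5, 1.0, d_hid)       # element-wise recurrent weights
b = np.zeros(d_hid)

x_seq = rng.normal(size=(T, d_in))     # stand-in for voltage/current/temperature features
h = np.zeros(d_hid)
for x_t in x_seq:
    h = np.maximum(0.0, W @ x_t + u * h + b)   # independent recurrence per neuron
print(h.round(3))
```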

  • Research Article
  • 10.32985/ijeces.15.3.5
Precipitation forecast using RNN variants by analyzing Optimizers and Hyperparameters for Time-series based Climatological Data
  • Jan 1, 2024
  • International journal of electrical and computer engineering systems
  • J Subha + 1 more

Flood is a significant problem in many regions of the world for the catastrophic damage it causes to both property and human lives; excessive precipitation is the major cause. AI technologies, deep learning neural networks, and machine learning algorithms attempt realistic solutions to numerous disaster management challenges. This paper develops RNN-based rainfall/precipitation forecasting models by investigating the performance of various Recurrent Neural Network (RNN) architectures, Bidirectional RNN (BRNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), and ensemble models such as BRNN-GRU, BRNN-LSTM, LSTM-GRU, and BRNN-LSTM-GRU, using NASAPOWER datasets of Andhra Pradesh (AP) and Tamil Nadu (TN) in India. The different stages in the workflow of the methodology are data collection, data pre-processing, data splitting, defining hyperparameters, model building, and performance evaluation. Experiments identifying improved optimizers and hyperparameters for the time-series climatological data are conducted for accurate precipitation forecasting. The metrics Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Square Error (RMSE), and Root Mean Squared Logarithmic Error (RMSLE) are used to compare the precipitation predictions of the different models. The RNN variants and ensemble models BRNN, LSTM, GRU, BRNN-GRU, BRNN-LSTM, LSTM-GRU, and BRNN-LSTM-GRU produce predictions with RMSLE values of 2.448, 0.555, 0.255, 1.305, 1.383, 0.364, and 1.740 for AP and 1.735, 0.663, 0.152, 0.889, 1.118, 0.379, and 1.328 for TN, respectively. The best-performing RNN model, GRU, when ensembled with the existing statistical model SARIMA, produces RMSLE values of 0.754 and 1.677 for AP and TN, respectively.
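
For reference, the RMSLE metric quoted above can be computed as sketched below, assuming non-negative rainfall values.

```python
# Sketch: Root Mean Squared Logarithmic Error, assuming non-negative rainfall values.
import numpy as np

def rmsle(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

print(round(rmsle([0.0, 2.0, 10.0, 35.0], [0.5, 1.5, 12.0, 30.0]), 3))
```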

  • Conference Article
  • 10.1109/iccect57938.2023.10140414
An Investigation into the Detection of Human Scratching Activity Based on Deep Learning Models
  • Apr 28, 2023
  • Kevin Wang

Because pruritus is often overlooked and undertreated in the clinical setting, a major unmet need is objective measurement of behaviors associated with scratching in order to quantify itch severity and frequency, since scratching directly correlates with itch. Such methods to measure itch and how itch severity changes over time are needed to objectively study and understand pruritus, develop and assess the efficacy of new medications, quantify disease severity in patients, and monitor treatment response. Wearable sensors in the form of wrist actigraphy, which detect wrist movements over time using micro-accelerometers, are the most studied and tested method to detect scratching events. To address these issues, seven deep learning models are trained and tested for scratch detection: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) with Gated Recurrent Unit (GRU), RNN with Long Short-Term Memory (LSTM), CNN & RNN-GRU (end-to-end), CNN & RNN-LSTM (end-to-end), CNN & RNN-GRU (parallel), and CNN & RNN-LSTM (parallel). The final results show that deep learning can accurately detect scratching (the CNN achieved a high accuracy of 0.996) in various situations and can provide useful information (time, frequency, scratched body part, etc.) about scratching behavior during the day and at night in order to better quantify pruritus for use in the medical field.
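
A minimal sketch of the "parallel" CNN & RNN-GRU variant on windows of tri-axial wrist-accelerometer data is shown below; the window length, filter sizes, and layer widths are assumptions, not the paper's configuration.

```python
# Sketch of a parallel CNN + GRU classifier for windows of tri-axial accelerometer
# data (scratch vs. no scratch). Window length and layer sizes are assumptions.
import tensorflow as tf

win, channels = 128, 3
inp = tf.keras.layers.Input(shape=(win, channels))

cnn = tf.keras.layers.Conv1D(32, 5, activation="relu")(inp)   # local motion patterns
cnn = tf.keras.layers.GlobalMaxPooling1D()(cnn)

rnn = tf.keras.layers.GRU(32)(inp)                            # temporal dynamics

merged = tf.keras.layers.Concatenate()([cnn, rnn])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)  # scratch probability

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```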

  • Research Article
  • Cited by 9
  • 10.3390/bioengineering11010077
Optimizing RNNs for EMG Signal Classification: A Novel Strategy Using Grey Wolf Optimization.
  • Jan 13, 2024
  • Bioengineering (Basel, Switzerland)
  • Marcos Aviles + 4 more

Accurate classification of electromyographic (EMG) signals is vital in biomedical applications. This study evaluates different architectures of recurrent neural networks for the classification of EMG signals associated with five movements of the right upper extremity. A Butterworth filter was implemented for signal preprocessing, followed by segmentation into 250 ms windows, with an overlap of 190 ms. The resulting dataset was divided into training, validation, and testing subsets. The Grey Wolf Optimization algorithm was applied to the gated recurrent unit (GRU), long short-term memory (LSTM) architectures, and bidirectional recurrent neural networks. In parallel, a performance comparison with support vector machines (SVMs) was performed. The results obtained in the first experimental phase revealed that all the RNN networks evaluated reached a 100% accuracy, standing above the 93% achieved by the SVM. Regarding classification speed, LSTM ranked as the fastest architecture, recording a time of 0.12 ms, followed by GRU with 0.134 ms. Bidirectional recurrent neural networks showed a response time of 0.2 ms, while SVM had the longest time at 2.7 ms. In the second experimental phase, a slight decrease in the accuracy of the RNN models was observed, standing at 98.46% for LSTM, 96.38% for GRU, and 97.63% for the bidirectional network. The findings of this study highlight the effectiveness and speed of recurrent neural networks in the EMG signal classification task.
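
The preprocessing described above, Butterworth filtering followed by 250 ms windows with 190 ms overlap, can be sketched as follows; the sampling rate, filter order, and pass band are assumptions.

```python
# Sketch: Butterworth band-pass filtering of an EMG channel, then segmentation
# into 250 ms windows with 190 ms overlap. Sampling rate and band are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                       # assumed sampling rate in Hz
emg = np.random.randn(10 * fs)                  # stand-in for one EMG channel

b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
emg_f = filtfilt(b, a, emg)

win = int(0.250 * fs)                           # 250 ms window
step = int(0.060 * fs)                          # 190 ms overlap -> 60 ms hop
windows = np.stack([emg_f[i:i + win]
                    for i in range(0, len(emg_f) - win + 1, step)])
print(windows.shape)                            # (n_windows, 250)
```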

  • Research Article
  • Cited by 95
  • 10.3390/info12110442
Analysis of Gradient Vanishing of RNNs and Performance Comparison
  • Oct 25, 2021
  • Information
  • Seol-Hyun Noh

A recurrent neural network (RNN) combines variable-length input data with a hidden state that depends on previous time steps to generate output data. RNNs have been widely used in time-series data analysis, and various RNN algorithms have been proposed, such as the standard RNN, long short-term memory (LSTM), and gated recurrent units (GRUs). In particular, it has been experimentally shown that LSTM and GRU have higher validation accuracy and prediction accuracy than the standard RNN. Learning ability is a measure of how effectively the gradient of the error information is backpropagated. This study provides a theoretical and experimental basis for the result that LSTM and GRU have more efficient gradient descent than the standard RNN by analyzing, both theoretically and experimentally, the gradient vanishing behavior of the standard RNN, LSTM, and GRU. As a result, LSTM and GRU are robust to the degradation of gradient descent even when learning long-range input data, which means that the learning ability of LSTM and GRU is greater than that of the standard RNN when learning long-range input data. Therefore, LSTM and GRU have higher validation accuracy and prediction accuracy than the standard RNN. In addition, it was verified whether the experimental results of river-level prediction models, solar power generation prediction models, and speech signal models using the standard RNN, LSTM, and GRU are consistent with the analysis of gradient vanishing.
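
A small experiment in the spirit of this analysis compares how much gradient from the output reaches the earliest inputs of a long sequence for SimpleRNN, LSTM, and GRU layers; the sequence length and layer size below are arbitrary.

```python
# Sketch: measure how much gradient from the final output reaches the earliest
# time steps for SimpleRNN, LSTM, and GRU. Sequence length and sizes are arbitrary.
import tensorflow as tf

seq = tf.random.normal([1, 200, 8])
for name, layer in [("SimpleRNN", tf.keras.layers.SimpleRNN(32)),
                    ("LSTM", tf.keras.layers.LSTM(32)),
                    ("GRU", tf.keras.layers.GRU(32))]:
    x = tf.Variable(seq)
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(layer(x))
    g = tape.gradient(loss, x)                        # gradient w.r.t. every input step
    ratio = tf.norm(g[:, :10]) / tf.norm(g[:, -10:])  # early vs. late time steps
    print(f"{name}: early/late gradient ratio = {float(ratio):.2e}")
```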

  • Research Article
  • Cited by 3
  • 10.46338/ijetae0822_15
Performance Evaluation of Recurrent Neural Networks Applied to Indoor Camera Localization
  • Aug 2, 2022
  • International Journal of Emerging Technology and Advanced Engineering
  • Muhammad S Alam + 2 more

Researchers in robotics and computer vision are experimenting with image-based localization of indoor cameras. Implementing indoor camera localization with a convolutional neural network (CNN) or recurrent neural network (RNN) is challenging for large image datasets because of the internal structure of the CNN or RNN. We can choose a preferable CNN or RNN variant based on the problem type and the size of the dataset. CNN is the most flexible method for implementing indoor localization problems. Despite the CNN's suitability for hyper-parameter selection, it requires a lot of training images to achieve high accuracy, and overfitting leads to a decrease in accuracy. RNNs, which retain information about past inputs in internal memory, are introduced to address these problems. Long short-term memory (LSTM), bi-directional LSTM (BiLSTM), and gated recurrent unit (GRU) are three variants of the RNN, and we may choose the most appropriate variant based on the problem type and dataset. In this study, we recommend which variant trains more quickly and which variant produces more accurate results. Vanishing gradient issues also affect RNNs, making it difficult to learn from more data; the LSTM is used to overcome the gradient vanishing problem. The BiLSTM is an advanced version of the LSTM and is capable of higher performance than the LSTM, while the GRU is a more advanced RNN variant that is computationally more efficient than an LSTM. In this study, we explore a variety of recurrent units for localizing indoor cameras, focusing on the more powerful recurrent units LSTM, BiLSTM, and GRU. Using the Microsoft 7-Scenes and InteriorNet datasets, we evaluate the performance of LSTM, BiLSTM, and GRU. Our experiments show that the BiLSTM is more accurate than the LSTM and GRU, and that the GRU is faster than the LSTM and BiLSTM.
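
A minimal sketch of a BiLSTM regressor over a short sequence of per-frame image features, predicting a 6-DoF camera pose as position plus quaternion, is given below; the feature extractor, sequence length, and pose parameterization are assumptions.

```python
# Sketch: BiLSTM over a short sequence of per-frame image feature vectors,
# regressing a 7-D camera pose (3-D position + unit quaternion). Sizes assumed.
import tensorflow as tf

seq_len, feat_dim = 8, 512          # e.g., CNN features per frame (assumption)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, feat_dim)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(7),       # x, y, z + quaternion (qw, qx, qy, qz)
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```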

