Two improved physics-informed Neural Networks for solving Burgers equation

Similar Papers
  • Research Article
  • Cited by 29
  • 10.1016/j.jcp.2023.112415
Enforcing continuous symmetries in physics-informed neural network for solving forward and inverse problems of partial differential equations
  • Aug 9, 2023
  • Journal of Computational Physics
  • Zhi-Yong Zhang + 3 more

  • Research Article
  • Cited by 31
  • 10.1017/dce.2022.24
Scalable algorithms for physics-informed neural and graph networks
  • Jan 1, 2022
  • Data-Centric Engineering
  • Khemraj Shukla + 3 more

Physics-informed machine learning (PIML) has emerged as a promising new approach for simulating complex physical and biological systems that are governed by complex multiscale processes for which some data are also available. In some instances, the objective is to discover part of the hidden physics from the available data, and PIML has been shown to be particularly effective for such problems for which conventional methods may fail. Unlike commercial machine learning where training of deep neural networks requires big data, in PIML big data are not available. Instead, we can train such networks from additional information obtained by employing the physical laws and evaluating them at random points in the space–time domain. Such PIML integrates multimodality and multifidelity data with mathematical models, and implements them using neural networks or graph networks. Here, we review some of the prevailing trends in embedding physics into machine learning, using physics-informed neural networks (PINNs) based primarily on feed-forward neural networks and automatic differentiation. For more complex systems or systems of systems and unstructured data, graph neural networks (GNNs) present some distinct advantages, and here we review how physics-informed learning can be accomplished with GNNs based on graph exterior calculus to construct differential operators; we refer to these architectures as physics-informed graph networks (PIGNs). We present representative examples for both forward and inverse problems and discuss what advances are needed to scale up PINNs, PIGNs and more broadly GNNs for large-scale engineering problems.
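
The collocation-based PINN training described above can be made concrete with a small sketch. Below, the "physics loss" for the viscous Burgers equation u_t + u·u_x = ν·u_xx (the test case of the main paper) is evaluated at random space–time collocation points. This is a hedged stand-in, not any paper's implementation: the neural network is replaced by a closed-form candidate and automatic differentiation by central finite differences, so the idea can be checked with NumPy alone.

```python
import numpy as np

# Hedged sketch of the "physics loss" a PINN minimizes for the viscous
# Burgers equation u_t + u*u_x = nu*u_xx, evaluated at random
# collocation points. The network is replaced by closed-form candidate
# functions, and derivatives come from central finite differences
# instead of automatic differentiation.

def burgers_residual(u, x, t, nu=0.01, h=1e-4):
    """Pointwise residual u_t + u*u_x - nu*u_xx via central differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_t + u(x, t) * u_x - nu * u_xx

def physics_loss(u, n_points=1000, seed=0):
    """Mean-squared PDE residual at random space-time collocation points."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_points)
    t = rng.uniform(0.1, 1.0, n_points)
    return float(np.mean(burgers_residual(u, x, t) ** 2))

# u(x,t) = x/(1+t) solves viscous Burgers exactly (since u_xx = 0),
# so its physics loss is ~0; a perturbed candidate is penalized.
exact = lambda x, t: x / (1.0 + t)
wrong = lambda x, t: x / (1.0 + t) + 0.1 * np.sin(np.pi * x)

print(physics_loss(exact))  # ~0
print(physics_loss(wrong))  # clearly nonzero
```

In an actual PINN, `exact` would be the network's output and the loss gradient would flow back into its weights; the point of the sketch is only that the true solution annihilates the residual while wrong candidates are penalized.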

  • Conference Article
  • Cited by 3
  • 10.2118/207800-ms
Physics Informed Neural Networks Based on a Capacitance Resistance Model for Reservoirs Under Water Flooding Conditions
  • Dec 9, 2021
  • Marco Maniglio + 3 more

In recent years, great interest has arisen toward surrogate reservoir models based on data-driven methodologies with the purpose of speeding up reservoir management decisions. In this work, a Physics Informed Neural Network (PINN) based on a Capacitance Resistance Model (CRM) has been developed and tested on a synthetic and a real dataset to predict the production of oil reservoirs under waterflooding conditions. CRMs are simple models based on material balance that estimate the liquid production as a function of injected water and bottom hole pressure. PINNs are Artificial Neural Networks (ANNs) that incorporate prior physical knowledge of the system under study to regularize the network. A PINN based on a CRM is obtained by including the residual of the CRM differential equations in the loss function designed to train the neural network on the historical data. During training, the weights and biases of the network and the parameters of the physical equations, such as connectivity factors between wells, are updated with the backpropagation algorithm. To investigate the effectiveness of the novel methodology on waterflooded scenarios, two test cases are presented: a small synthetic one and a real mature reservoir. Results obtained with the PINN are compared with those of the CRM and ANN alone. In the synthetic case, the CRM and PINN give slightly better-quality history matches and predictions than the ANN. The connectivity factors estimated by the CRM and PINN are very similar and correctly represent the underlying geology. In the real case, the PINN gives better-quality history matches and predictions than the ANN, and both significantly outperform the CRM. Even though the CRM formulation is too simple to predict the complex behavior of a real reservoir, the CRM-based regularization contributes to improving the quality of the PINN predictions compared to the purely data-driven ANN model. The connectivity factors estimated by the CRM and PINN are not in agreement. However, the latter method provided results closer to our understanding of the flooding process after many years of operations and data analysis. All things considered, the PINN outperformed both the CRM and ANN in terms of predictivity and interpretability, effectively combining strengths from both methodologies. The presented approach does not require the construction of a 3D model, since it learns directly from production data while preserving physical consistency. Moreover, it represents a computationally inexpensive alternative to traditional full-physics reservoir simulations, which could have vast applications for problems requiring many forward evaluations, like the optimization of water allocation for mature reservoirs.
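
The key mechanism of the abstract above, treating parameters of the physical equations as trainable quantities updated by gradients alongside the network weights, can be sketched with a toy inverse problem. Here a single decay time `tau` in a material-balance-like model q(t) = q0·exp(−t/tau) stands in for CRM parameters such as well connectivity factors; the network weights, and the CRM equations themselves, are omitted, so this is only a hedged illustration of the parameter-update idea, not the paper's method.

```python
import numpy as np

# Hedged toy: a physical parameter (here a decay time tau, standing in
# for CRM connectivity factors) is treated as trainable and recovered
# from synthetic production data by gradient descent on the data-misfit
# loss. The gradient is estimated by finite differences as a stand-in
# for backpropagation.

def loss(tau, t, q_data, q0=1.0):
    """Mean-squared misfit between model q0*exp(-t/tau) and data."""
    return float(np.mean((q0 * np.exp(-t / tau) - q_data) ** 2))

def fit_tau(t, q_data, tau0=2.0, lr=0.5, steps=500, h=1e-6):
    tau = tau0
    for _ in range(steps):
        grad = (loss(tau + h, t, q_data) - loss(tau - h, t, q_data)) / (2 * h)
        tau -= lr * grad  # gradient step, analogous to a backprop update
    return tau

t = np.linspace(0.0, 5.0, 50)
q_data = np.exp(-t / 1.3)   # synthetic data generated with tau_true = 1.3
tau_hat = fit_tau(t, q_data)
print(tau_hat)              # close to 1.3
```

In the full CRM-PINN, this update runs jointly with the network's weight updates inside one loss that also contains the CRM equation residual.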

  • Research Article
  • 10.5902/2179460x89888
Burgers' PINNs with transfer learning by θ-scheme
  • Jan 15, 2025
  • Ciência e Natura
  • Vitória Biesek + 1 more

The Burgers equation is a well-established test case in the computational modeling of several phenomena, such as fluid dynamics, gas dynamics, shock theory, cosmology, and others. In this work, we present the application of physics-informed neural networks (PINNs) with a transfer learning approach based on the θ-scheme to solve the Burgers equation. The proposed approach searches for a discrete solution in time through a sequence of artificial neural networks (ANNs). At each time step, the previous ANN transfers its learning to the next network model, which learns the solution at the current time by minimizing a loss function based on the θ-scheme approximation of the Burgers equation. To test this approach, we present its application to two benchmark problems with known analytical solutions. Compared to usual PINN models, the proposed approach has the advantage of requiring smaller neural network architectures while achieving similarly accurate results, potentially decreasing computational costs.
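
The θ-scheme residual that each per-time-step network minimizes in the approach above can be sketched as follows. Writing the Burgers equation as u_t = F(u) with F(u) = −u·u_x + ν·u_xx, the θ-scheme reads (u^{n+1} − u^n)/Δt = θ·F(u^{n+1}) + (1 − θ)·F(u^n). This is a hedged, network-free sketch: the two networks are replaced by exact snapshots of u(x, t) = x/(1 + t), which solves viscous Burgers for any ν, and these nearly annihilate the residual, which is the property training drives a network's output toward.

```python
import numpy as np

# Hedged sketch of the theta-scheme residual minimized at each time
# step in the transfer-learning approach above. With
#   F(u) = -u*u_x + nu*u_xx,
# the scheme is (u_new - u_old)/dt = theta*F(u_new) + (1-theta)*F(u_old).
# Spatial derivatives use central differences on a grid; the per-step
# networks are replaced by exact snapshots of u(x,t) = x/(1+t).

def F(u, dx, nu=0.01):
    """Right-hand side -u*u_x + nu*u_xx via central differences (interior)."""
    u_x = (u[2:] - u[:-2]) / (2 * dx)
    u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return -u[1:-1] * u_x + nu * u_xx

def theta_residual(u_old, u_new, dt, dx, theta=0.5):
    """Interior residual of the theta-scheme; ~0 for consistent snapshots."""
    lhs = (u_new[1:-1] - u_old[1:-1]) / dt
    rhs = theta * F(u_new, dx) + (1 - theta) * F(u_old, dx)
    return lhs - rhs

x = np.linspace(-1.0, 1.0, 101)
dx = x[1] - x[0]
dt = 1e-3
u_old = x / (1.0 + 0.5)        # exact snapshot at t = 0.5
u_new = x / (1.0 + 0.5 + dt)   # exact snapshot at t = 0.5 + dt

mse = float(np.mean(theta_residual(u_old, u_new, dt, dx) ** 2))
print(mse)  # small: the exact solution nearly annihilates the residual
```

With θ = 0.5 this is the Crank–Nicolson variant, whose O(Δt²) truncation error is what remains in the printed value.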

  • Research Article
  • Cited by 248
  • 10.1029/2021jb023120
Physics‐Informed Neural Networks (PINNs) for Wave Propagation and Full Waveform Inversions
  • Apr 27, 2022
  • Journal of Geophysical Research: Solid Earth
  • Majid Rasht‐Behesht + 3 more

We propose a new approach to the solution of wave propagation problems and full waveform inversions (FWIs) based on a recent advance in deep learning called physics-informed neural networks (PINNs). In this study, we present an algorithm for PINNs applied to the acoustic wave equation and test the method with both forward models and FWI case studies. These synthetic case studies are designed to explore the ability of PINNs to handle varying degrees of structural complexity using both teleseismic plane waves and seismic point sources. PINNs' meshless formalism allows for a flexible implementation of the wave equation and different types of boundary conditions. For instance, our models demonstrate that PINNs automatically satisfy absorbing boundary conditions, a serious computational challenge for common wave propagation solvers. Furthermore, a priori knowledge of the subsurface structure can be seamlessly encoded in PINNs' formulation. We find that the current state-of-the-art PINNs provide good results for the forward model, even though spectral element or finite difference methods are more efficient and accurate. More importantly, our results demonstrate that PINNs yield excellent results for inversions in all cases considered, and with limited computational complexity. We discuss the current limitations of the method with complex velocity models, as well as strategies to overcome these challenges. Using PINNs as a geophysical inversion solver offers exciting perspectives, not only for full waveform seismic inversions but also when dealing with other geophysical datasets (e.g., MT, gravity) and joint inversions, because of its robust framework and simple implementation.

  • Research Article
  • Cited by 3
  • 10.1016/j.neunet.2025.107166
Physics-informed Neural Implicit Flow neural network for parametric PDEs.
  • May 1, 2025
  • Neural Networks: The Official Journal of the International Neural Network Society
  • Zixue Xiang + 4 more

  • Research Article
  • Cited by 5
  • 10.3390/w15132320
Physics-Informed Neural Networks-Based Salinity Modeling in the Sacramento–San Joaquin Delta of California
  • Jun 21, 2023
  • Water
  • Dong Min Roh + 11 more

Salinity in estuarine environments has been traditionally simulated using process-based models. More recently, data-driven models including artificial neural networks (ANNs) have been developed for simulating salinity. Compared to process-based models, ANNs yield faster salinity simulations with comparable accuracy. However, ANNs are often purely data-driven and not constrained by physical laws, making it difficult to interpret the causality between input and output data. Physics-informed neural networks (PINNs) are emerging machine-learning models to integrate the benefits of both process-based models and data-driven ANNs. PINNs can embed the knowledge of physical laws in terms of the partial differential equations (PDE) that govern the dynamics of salinity transport into the training of the neural networks. This study explores the application of PINNs in salinity modeling by incorporating the one-dimensional advection–dispersion salinity transport equation into the neural networks. Two PINN models are explored in this study, namely PINNs and FoNets. PINNs are multilayer perceptrons (MLPs) that incorporate the advection–dispersion equation, while FoNets are an extension of PINNs with an additional encoding layer. The exploration is exemplified at four study locations in the Sacramento–San Joaquin Delta of California: Pittsburg, Chipps Island, Port Chicago, and Martinez. Both PINN models and benchmark ANNs are trained and tested using simulated daily salinity from 1991 to 2015 at study locations. Results indicate that PINNs and FoNets outperform the benchmark ANNs in simulating salinity at the study locations. Specifically, PINNs and FoNets have lower absolute biases and higher correlation coefficients and Nash–Sutcliffe efficiency values than ANNs. In addition, PINN models overcome some limitations of purely data-driven ANNs (e.g., neuron saturation) and generate more realistic salinity simulations. Overall, this study demonstrates the potential of PINNs to supplement existing process-based and ANN models in providing accurate and timely salinity estimation.

  • Research Article
  • 10.1149/ma2024-012345mtgabs
Rapid Inverse Parameter Inference Using Physics-Informed Neural Networks
  • Aug 9, 2024
  • Electrochemical Society Meeting Abstracts
  • Malik Hassanaly + 4 more

As Li-ion batteries become more essential in today's economy, tools need to be developed to accurately and rapidly diagnose a battery's internal state of health. Using a Li-ion battery's (high-rate) voltage response, it is proposed to determine a battery's internal state through Bayesian calibration. However, Bayesian calibration is notoriously slow and requires thousands of model runs. To accelerate parameter inference using Bayesian calibration, a surrogate model is developed to replace the underlying physics-based Li-ion model. Developing a surrogate model for rapid Bayesian calibration analysis is discussed for both the single particle model (SPM) and the pseudo-two-dimensional (P2D) model. Surrogate models are constructed using physics-informed neural networks (PINNs) that encode the influence of internal properties on observed voltage responses. In practice, a neural network can be trained by: 1) using simulation results of the physics-based model (i.e., a data-loss approach); 2) using the residuals of the governing equations themselves (i.e., a physics-loss approach); or 3) using a combination of simulation results and governing equation residuals. In the present work, PINNs are developed using a variety of training losses and neural network architectures. In this analysis, it is shown that a PINN surrogate model can be reliably trained with only a physics-informed loss. However, a coupled data-informed and physics-loss approach produced the most accurate PINNs. Figure 1 illustrates the absolute relative errors of trained PINN networks using several different training losses and neural network architectures. After determining a consistent training strategy for both the SPM and P2D PINN surrogate models, the PINNs are extended to determine additional internal state-of-health parameters. As more and more parameters were introduced, the PINN training suffered from "the curse of dimensionality", which was mitigated by using a hierarchical training approach, where a PINN trained with fewer variable model parameters was used to train a PINN with more variable model parameters. Next, the high-dimensional PINN surrogates are integrated into Bayesian calibration schemes to identify internal Li-ion battery properties from experimentally measured voltages. Interpreting the high-dimensional parameter posteriors is discussed with respect to model error, parameter prior choices, and experimental errors.
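
The hierarchical training idea mentioned above, using a model trained with fewer variable parameters to initialize one with more, can be illustrated with a deliberately tiny stand-in. This is a hedged toy, not the paper's battery surrogate: the 1- and 2-parameter models, the synthetic data, and the finite-difference "backprop" are all illustrative.

```python
import numpy as np

# Hedged toy of hierarchical training: a model with few free parameters
# is trained first, and its result warm-starts a model with more free
# parameters. The Li-ion surrogates are replaced by 1- and 2-parameter
# polynomial fits trained by gradient descent.

def grad_step(params, x, y, model, lr=0.5, h=1e-6):
    """One gradient-descent step on the MSE loss; the gradient is
    estimated by central finite differences (a stand-in for backprop)."""
    loss = lambda p: float(np.mean((model(p, x) - y) ** 2))
    grad = np.array([(loss(params + h * e) - loss(params - h * e)) / (2 * h)
                     for e in np.eye(params.size)])
    return params - lr * grad

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 * x**2            # synthetic "voltage response"

# Stage 1: low-dimensional model y = a*x.
m1 = lambda p, x: p[0] * x
p1 = np.array([0.0])
for _ in range(300):
    p1 = grad_step(p1, x, y, m1)

# Stage 2: richer model y = a*x + b*x**2, warm-started from stage 1.
m2 = lambda p, x: p[0] * x + p[1] * x**2
p2 = np.array([p1[0], 0.0])
for _ in range(2000):
    p2 = grad_step(p2, x, y, m2)

print(p2)  # approaches the true parameters [2.0, 0.5]
```

The warm start gives the richer model a sensible starting point in the shared parameters, which is the mitigation for the curse of dimensionality that the abstract describes.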

  • Research Article
  • Cited by 159
  • 10.1016/j.cma.2021.114474
A novel sequential method to train physics informed neural networks for Allen Cahn and Cahn Hilliard equations
  • Jan 4, 2022
  • Computer Methods in Applied Mechanics and Engineering
  • Revanth Mattey + 1 more

  • Research Article
  • Cited by 41
  • 10.1016/j.jcp.2023.112603
NAS-PINN: Neural architecture search-guided physics-informed neural network for solving PDEs
  • Oct 27, 2023
  • Journal of Computational Physics
  • Yifan Wang + 1 more

  • Research Article
  • 10.1142/s021987622441010x
Pre-Conditioned Physics-Informed Neural Network for Inverse Problems
  • May 6, 2025
  • International Journal of Computational Methods
  • Yijun Lu + 2 more

Physics-informed neural networks (PINNs) can solve inverse problems on their own, circumventing the dependence of other existing computational inverse methods on the results of forward-problem numerical solvers. However, it has been found that when the magnitudes of different physical parameters in the inverse problem vary significantly from each other, PINNs may either fail to converge or converge to incorrect solutions, resulting in poor inverse accuracy. Our previous study revealed that the standard preprocessing in neural networks normalizes only the input and output data, neglecting the newly introduced trainable inverse parameters within the PINN framework. Thereby, it fails to ensure an appropriate scale for the overall trainable parameter space, potentially leading to an inappropriately “flattened” elliptical distribution. In addition, the scale differences of physical parameters affect the loss terms in PINNs, exacerbating the loss imbalance and leading to notable gradient biases. To address these issues, two crucial pre-constraint strategies are proposed in this study: (1) preprocessing is enhanced through nondimensionalization, scaling the highest-order derivative coefficients, and selecting the PINN inverse parameters, to ensure that the magnitude scales of the trainable parameter space are appropriate and the loss terms are at unit scale, mitigating the loss imbalance; (2) the network architecture is constrained by the initial conditions, boundary conditions, and inverse data through a constructive function, to eliminate the loss imbalance completely. The proposed method is thus termed pre-conditioned PINN (PC-PINN), and its effectiveness is validated against the vanilla PINN and hPINN methods through numerical examples. The research results indicate that the PC-PINN method can effectively address the preprocessing and loss-imbalance problems of PINNs, and its inverse accuracy is an order of magnitude higher than those of the compared PINN and hPINN methods.
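
The first pre-constraint above, nondimensionalizing so that trainable parameters and loss terms sit at unit scale, can be illustrated on a generic diffusion model. The equation and all numbers below are illustrative, not taken from the paper.

```python
# Hedged illustration of nondimensionalization as a pre-constraint.
# For u_t = D*u_xx with an illustrative tiny D = 1e-6 m^2/s on a domain
# of length L = 1e-2 m, rescaling x* = x/L and t* = t/T with the
# diffusive time scale T = L**2/D turns the equation into
# u_t* = D_star * u_x*x* with D_star = D*T/L**2 = 1, so a trainable
# inverse parameter for the diffusivity sits at unit scale.

D = 1e-6               # dimensional diffusivity: poorly scaled if trained as-is
L = 1e-2               # characteristic length of the domain
T = L**2 / D           # induced diffusive time scale (100 s)
D_star = D * T / L**2  # nondimensional coefficient the network would train on
print(T, D_star)       # 100.0 and ~1.0
```

The same rescaling brings the PDE-residual loss term to the same order of magnitude as the data terms, which is the loss-imbalance point the abstract makes.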

  • Conference Article
  • 10.1115/imece2024-145263
Physics Informed Deep Neural Networks for Strength Evaluation Based on Shakedown Analysis
  • Nov 17, 2024
  • Songhua Huang + 3 more

In the field of structural engineering and materials science, particularly in aerospace engineering, optimizing structural design for strength while minimizing material usage presents a complex challenge, especially under variable loading conditions. This research introduces, for the first time, an approach to strength evaluation through the innovative application of Physics Informed Neural Networks (PINNs) in shakedown analysis. This paper presents a novel methodology that combines the principles of physics-informed machine learning with the rigorous demands of shakedown analysis. Shakedown analysis offers a sophisticated framework for determining the safe load-bearing capacity of structures beyond the conventional elastic limit but within the plastic threshold, without the need to consider the history of loading conditions. This methodology enables engineers to design lighter, more material-efficient structures by safely harnessing the structure's capacity to withstand loads without reaching failure. Our approach leverages PINNs, which integrate the differential equations governing physical laws directly into the learning process of deep neural networks. The PINNs are further advanced by incorporating self-equilibrating stress field relations, essential for shakedown analysis. This integration enables accurate prediction of the shakedown limit strength, crucial for determining a structure's ability to endure repeated loading without failure. By adding these relations to the mechanical equilibrium equation and constitutive equations within the neural network architecture, the networks now offer a comprehensive modeling capability, extending their application to more complex scenarios in solid mechanics, including accurate shakedown limit predictions. The proposed methodology introduces key innovations by extending PINNs to nonlinear problems with complex elastoplastic behavior through shakedown analysis and by proposing a multi-network PINN model for more accurate structural response representation. To validate our approach, we employ synthetic data derived from analytical and numerical reference solutions, focusing on convergence behavior and accuracy. Our research highlights the robustness of PINNs in handling sparse data and extrapolating across a wide range of parameters, a critical aspect in the context of shakedown analysis, where the design space is vast and complex. This ability to predict accurately under previously unseen conditions underscores the potential of PINNs not only in surrogate modeling but also in further sensitivity analysis, providing a powerful tool for engineers to explore and optimize structural designs efficiently. Furthermore, the technique is used to determine the shakedown strength of a manned airtight module.

  • Research Article
  • Cited by 109
  • 10.1016/j.cma.2022.115616
A mixed formulation for physics-informed neural networks as a potential solver for engineering problems in heterogeneous domains: Comparison with finite element method
  • Sep 20, 2022
  • Computer Methods in Applied Mechanics and Engineering
  • Shahed Rezaei + 4 more

  • Dissertation
  • 10.37099/mtu.dc.etdr/1272
Phase field fracture modeling for chemically strengthened glass and Machine Learning for Reaction–diffusion equations
  • Jan 1, 2021
  • Revanth Mattey

Chemical strengthening (CS) of glass through artificial processes like ion exchange has emerged as a leading technique to improve the fracture toughness of glass. In this study, a novel finding with regard to fracture resistance is presented. The stress intensity factor of CS glass with varying initial flaw depths has been measured using experimental, analytical, and numerical approaches. The main focus of this thesis is modeling CS glass using finite element simulations with phase field fracture modeling. Through thorough investigation of the numerical, analytical, and experimental results, it has been observed that the fracture toughness of CS glass varies with factors like initial crack depth and the degree of chemical strengthening. A physics-informed neural network (PINN) incorporates the physics of a system by satisfying its boundary value problem through the neural network's loss function. The PINN approach has shown great success in approximating the map between the solution of a partial differential equation (PDE) and its spatio-temporal input. However, for strongly nonlinear and higher-order partial differential equations, a PINN's accuracy reduces significantly. To resolve this problem, we propose a novel PINN scheme that solves the PDE sequentially over successive time segments using a single neural network. The key idea is to re-train the same neural network for solving the PDE over successive time segments while satisfying the already obtained solution for all previous time segments. It is thus named backward-compatible PINN (bc-PINN). To illustrate the advantages of bc-PINN, we have used the Cahn–Hilliard and Allen–Cahn equations, which are widely used to describe phase separation and reaction–diffusion systems. Our results show significant improvement in accuracy over the PINN method while using a smaller number of collocation points. Additionally, we have shown that using the phase space technique for a higher-order PDE could further improve the accuracy and efficiency of the bc-PINN scheme.
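
The bc-PINN training target described above, re-fitting one model on a new time segment while penalizing deviation from the solution already obtained on earlier segments, can be sketched with a hedged toy: the neural network is replaced by a quintic polynomial, "training" by a weighted least-squares fit (which `np.polyfit` supports directly), and the target u(t) = sin(t) is illustrative.

```python
import numpy as np

# Hedged toy of the bc-PINN idea: when the single model is re-trained
# on a new time segment, the previous segment's stored solution enters
# the fit with extra weight, keeping the model backward compatible.

def bc_fit(t_prev, u_prev_stored, t_new, u_new_data, deg=5, w_prev=10.0):
    """Fit one model to new-segment data while staying backward
    compatible with the stored previous-segment solution."""
    t = np.concatenate([t_prev, t_new])
    u = np.concatenate([u_prev_stored, u_new_data])
    w = np.concatenate([np.full(t_prev.size, w_prev), np.ones(t_new.size)])
    return np.polyfit(t, u, deg, w=w)

u_exact = lambda t: np.sin(t)
t1 = np.linspace(0.0, 1.0, 50)   # earlier segment (already solved)
t2 = np.linspace(1.0, 2.0, 50)   # current segment
model = np.poly1d(bc_fit(t1, u_exact(t1), t2, u_exact(t2)))

err_prev = float(np.max(np.abs(model(t1) - u_exact(t1))))
err_new = float(np.max(np.abs(model(t2) - u_exact(t2))))
print(err_prev, err_new)  # both small: the refit stays backward compatible
```

In the actual bc-PINN, the previous-segment term is a loss component on the same network rather than a weighted regression, but the trade-off it encodes is the same.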

  • Research Article
  • Cited by 1224
  • 10.1007/s10915-022-01939-z
Scientific Machine Learning Through Physics–Informed Neural Networks: Where we are and What’s Next
  • Jul 26, 2022
  • Journal of Scientific Computing
  • Salvatore Cuomo + 5 more

Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, like Partial Differential Equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs. While the primary goal of the study is to characterize these networks and their related advantages and disadvantages, the review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, starting from the vanilla PINN and covering many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and despite their demonstrated ability to be more feasible in some contexts than classical numerical techniques like the Finite Element Method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.
