Abstract

Reservoir simulation is based on solving second-order nonlinear Partial Differential Equations (PDEs). Owing to the high degree of nonlinearity and the irregular boundaries, analytical solutions are not applicable, and the PDEs must be solved numerically with nonlinear solvers. Dependence on derivatives and on proper initial guesses are the main disadvantages of the classic solvers. To overcome these obstacles, an approach for solving the supposed equations based on an Adaptive Neural Network (ANN) is introduced. The algorithm starts by substituting an initial set into the Nonlinear Simultaneous Algebraic Equations (NSAE). The outputs are compared with the desired vector of zeros to generate the error. The error vector and its derivative are employed, first, to update the ANN weights through the adaptation laws and, second, to build the input vector of the ANN. The outputs of the ANN are treated as corrections to the initial set, and the corrected set is reintroduced into the equations. The procedure continues iteratively until the outputs of the equations meet the required level of accuracy. By taking advantage of the adaptation laws, the outputs of the presented algorithm match those of the classic solvers, but at a lower computational cost. The convergence of the algorithm has been examined in practice for initial sets of various mathematical forms. The algorithm proved robust enough to converge for different forms of the initial set, even for physically invalid values such as negative numbers, although the records indicate that the convergence rate depends strongly on the values of the initial set. A sensitivity analysis of the primary ANN model led to an optimized network that solves the supposed NSAE three times faster.
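The iterative loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two-equation system `residual`, the single-hidden-layer network, the error signal `lam * e + (e - e_prev)`, the adaptive law `W2 += f_w * outer(e, h)`, and the residual-decrease safeguard with a plain relaxation fallback are all assumptions of this sketch. Note that no Jacobian is formed anywhere.

```python
import numpy as np

# A hypothetical 2-equation NSAE, f(x) = 0, standing in for the residuals of a
# discretized reservoir PDE; one root is (1, 2).
def residual(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def solve_nsae_ann(x0, n_neurons=8, lam=0.5, f_w=0.01, eta=0.1,
                   tol=1e-8, max_iter=2000, seed=0):
    """Iterate: residual -> error signal -> ANN correction -> updated guess."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    W1 = rng.standard_normal((n_neurons, n))   # fixed random input layer
    W2 = np.zeros((n, n_neurons))              # output layer, adapted online
    x = np.asarray(x0, dtype=float)
    e_prev = residual(x)
    for _ in range(max_iter):
        e = residual(x)
        if np.linalg.norm(e) < tol:
            break
        u = lam * e + (e - e_prev)             # error plus its discrete derivative
        h = np.tanh(W1 @ u)                    # hidden-layer activations
        dx = W2 @ h                            # network's proposed correction
        W2 += f_w * np.outer(e, h)             # adaptive law: W2 tracks the error
        # Safeguard (an addition of this sketch, not claimed by the paper):
        # accept the ANN correction only if it shrinks the residual; otherwise
        # take a plain relaxation step along the error.
        x_trial = x - dx
        if np.linalg.norm(residual(x_trial)) < np.linalg.norm(e):
            x = x_trial
        else:
            x = x - eta * e
        e_prev = e
    return x
```

For example, `solve_nsae_ann([1.5, 2.5])` drives the residual of this toy system toward zero near the root (1, 2), using only residual evaluations.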
It has been found that the number of neurons (NN), the diagonal coefficient matrix of the error (λ), and the adaptive coefficient (Fw) have the most significant impact on the performance of the algorithm. In contrast to Newton's method, the best-known nonlinear solver, the proposed algorithm does not require a proper initial guess. Moreover, it is entirely independent of computing the partial derivatives of the Jacobian matrix and its inverse, which yields a notable reduction in computational cost and is the other remarkable advantage of the proposed approach. The presented algorithm can serve as a platform for developing the next generation of simulators based on machine learning.
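For contrast, a textbook Newton iteration on the same hypothetical two-equation system makes the avoided costs explicit: every step assembles the full Jacobian and solves a linear system (equivalent to inverting it), and convergence hinges on a good initial guess. The `residual` and `jacobian` functions below belong to this illustrative system, not to the paper.

```python
import numpy as np

# The same hypothetical 2-equation system, f(x) = 0, with a root at (1, 2).
def residual(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def jacobian(x):
    # Newton needs every partial derivative, assembled by hand (or by AD).
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # Forming J and solving J * dx = f each step is the cost the
        # ANN-based scheme avoids; Newton also fails where J is singular.
        x = x - np.linalg.solve(jac(x), fx)
    return x
```

From a reasonable starting point such as `[1.5, 2.5]`, Newton converges quadratically to (1, 2); from a guess where the Jacobian is near-singular, the linear solve breaks down, which is precisely the sensitivity the abstract attributes to classic solvers.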
