The Optimal Decomposition Method (ODM) coined herein is presented: an analytic decomposition approach to solving nonlinear problems that circumvents the well-known convergence barrier of nonlinear decomposition methods, e.g., the Adomian Decomposition Method (ADM) and the Duan-Rach ADM. ODM first removes the unknown coefficients from the recursion scheme. In addition, it is powered by multiple Convergence-Control Parameters (CCPs) introduced through the insertion of an 'artificial' parameter; this set of CCPs unlocks the full convergence potential of the decomposition method. It is shown in detail that ADM and the Duan-Rach ADM, the latter being a modified approach that prevents unknown coefficients from appearing in the Adomian recursion scheme for an l-point Boundary Value Problem (BVP) with l ≥ 2, are recovered as special cases of ODM; ODM therefore goes beyond a mere generalization of these methods. The CCPs are computed by minimizing the universal system error. For this purpose, a gradient-based contraction search is used together with a Genetic Algorithm (GA) to obtain the CCPs in strongly divergent examples involving several CCPs. Through examples, namely a generalized BVP test case, the Falkner-Skan equation, a two-state optimal control problem, the Burgers' equation, and a strongly nonlinear problem governing the electrohydrodynamic flow of a fluid in a circular cylindrical conduit, it is demonstrated that the CCPs substantially expand the radius of convergence. In these examples, ADM and the Duan-Rach ADM diverge as further series terms are added, whereas ODM remains a successful remedy. In practice, it was found that for most nonlinearities, minimizing the CCP-dependent system error at low orders of approximation also minimizes the system error at higher orders of approximation, i.e., natural convergence.
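The following is a minimal, hypothetical sketch of the two-stage CCP search described above: an evolutionary global search followed by gradient-based refinement of the system error. The objective function, number of CCPs, and bounds are placeholders (the true ODM system error would be the squared residual of the CCP-dependent truncated series in the governing equation), and SciPy's differential evolution is used here only as a stand-in for the Genetic Algorithm.

```python
# Hypothetical sketch: estimating convergence-control parameters (CCPs) by
# minimizing a "system error" with an evolutionary global search (stand-in
# for a GA) followed by a gradient-based local refinement.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def system_error(ccp):
    """Placeholder for the system error E_m(c1, ..., ck).

    In the actual method this would substitute the CCP-dependent truncated
    decomposition series into the nonlinear operator and boundary conditions
    and return a mean-squared residual over collocation points.
    """
    c = np.asarray(ccp)
    # Toy multimodal surrogate so the two-stage search is meaningful.
    return np.sum((c - 0.7) ** 2) + 0.3 * np.sum(np.sin(5.0 * c) ** 2)

bounds = [(-2.0, 2.0)] * 3  # assume three CCPs for the illustration

# Stage 1: evolutionary global search over the CCP box (GA stand-in).
coarse = differential_evolution(system_error, bounds, seed=0, tol=1e-8)

# Stage 2: gradient-based refinement starting from the evolutionary optimum.
refined = minimize(system_error, coarse.x, method="L-BFGS-B", bounds=bounds)

print("CCP estimate:", refined.x, "system error:", refined.fun)
```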