Machine learning is a prominent field of study, known for strong performance in estimation and classification tasks. Within this domain, artificial neural networks (ANNs) have emerged as one of the most powerful methodologies. Physics-informed neural networks (PINNs) have proven particularly adept at solving physics problems formulated as differential equations, incorporating boundary and initial conditions into the ANN's loss function. However, a critical challenge in ANNs lies in determining the optimal architecture, i.e., selecting the appropriate number of neurons and layers. The Single Multiplicative Neuron Model (SMNM) has traditionally been explored as a solution to this issue, using a single neuron with a multiplicative aggregation function in the hidden layer to improve computational efficiency. This study initially aimed to apply the SMNM within the PINNs framework, targeting the differential equation y′ − y = 0 with boundary conditions y(0) = 1 and y(1) = e.
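The PINN formulation described above amounts to a composite loss: an ODE-residual term evaluated at collocation points plus penalty terms for the two boundary conditions. A minimal numpy sketch of that loss for y′ − y = 0 (the function names, equal term weights, and the 50-point grid are illustrative assumptions, not details from the paper):

```python
import numpy as np

def pinn_loss(y_fn, dy_fn, xs):
    """Composite PINN loss for y' - y = 0 with y(0) = 1 and y(1) = e.

    y_fn, dy_fn: candidate solution and its derivative (callables);
    xs: collocation points in [0, 1]. Illustrative names, not the paper's API.
    """
    residual = np.mean((dy_fn(xs) - y_fn(xs)) ** 2)               # ODE residual term
    boundary = (y_fn(0.0) - 1.0) ** 2 + (y_fn(1.0) - np.e) ** 2   # boundary-condition penalties
    return residual + boundary

xs = np.linspace(0.0, 1.0, 50)

# The exact solution y = e^x drives every term to (numerically) zero.
print(pinn_loss(np.exp, np.exp, xs))                # near-zero

# A wrong candidate, y = 1 + x, incurs a visible penalty.
print(pinn_loss(lambda x: 1.0 + x, lambda x: np.ones_like(x), xs))
```

In a full PINN the derivative dy_fn would come from automatic differentiation of the network output rather than being supplied by hand; this sketch only illustrates the structure of the loss being minimized.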
Upon implementation, however, we found that although the conventional SMNM approach was theorized to offer significant advantages, its multiplicative aggregation function prevented convergence. Consequently, we introduced a "mimic single multiplicative neuron model" (mimic-SMNM), a single-neuron architecture designed to retain the SMNM's conceptual advantages while ensuring convergence and computational efficiency. Comparative analysis showed that real PINNs solved the equation accurately, the true SMNM failed to converge, and the mimic model stood out for its architectural simplicity and computational feasibility, making it faster and more efficient than real PINNs for solving simple differential equations. In particular, the proposed mimic-SMNM achieved a fivefold speedup over real PINNs after 30,000 epochs.
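For reference, the single multiplicative neuron replaces the usual weighted sum with a product of per-input affine terms, net = ∏ᵢ (wᵢxᵢ + bᵢ), passed through an activation. A minimal sketch under that standard SMNM formulation (the specific weights and the logistic activation are illustrative assumptions):

```python
import numpy as np

def smnm_forward(x, w, b):
    """Single multiplicative neuron: inputs are aggregated by a product
    of affine terms, net = prod_i (w_i * x_i + b_i), not a weighted sum."""
    net = np.prod(w * x + b)            # multiplicative aggregation
    return 1.0 / (1.0 + np.exp(-net))   # logistic activation (illustrative choice)

x = np.array([0.5, 2.0])
w = np.array([1.0, 0.5])
b = np.array([0.0, 0.0])
y = smnm_forward(x, w, b)  # net = 0.5 * 1.0 = 0.5, so y = sigmoid(0.5)
print(y)
```

Because the net input is a product, a single small or sign-flipping term can collapse or swing the whole aggregate, which makes gradients erratic; that sensitivity is one plausible mechanism behind the convergence failure reported above, though the abstract does not analyze the cause in detail.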