Abstract

We present a novel numerical integration technique, Neural Network Integration (NNI), in which a shallow neural network is used to approximate an integrand function within a bounded set. The approximation is constructed so that a closed-form solution exists for its definite integral over any generalized polyhedron within the network's domain. This closed-form solution allows for fast evaluation of the integral across different bounds once the network has been trained. In other words, it becomes possible to “pre-compute” the numerical integration problem, allowing for rapid evaluation later. Experimental tests are performed using the Genz integration test functions. These experiments show NNI to be a viable integration method that works best on predictable integrand functions but yields worse results on singular and non-smooth functions. NNI is proposed as a solution to problems where numerical integrals of higher dimension must be evaluated over different domains frequently or rapidly and with low memory requirements, such as in real-time or embedded engineering applications. The application of this method to the optimization of integral functions is also discussed.
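To make the core idea concrete, the following is a minimal one-dimensional sketch of the technique, not the paper's implementation: it assumes sigmoid activations and fits only the output layer by least squares (an extreme-learning-machine-style shortcut standing in for the authors' training procedure). Because the sigmoid's antiderivative is the softplus function, the fitted network's definite integral has a closed form, so new bounds can be evaluated without further sampling.

```python
# Minimal 1D sketch of the NNI idea: fit a shallow sigmoid network to an
# integrand, then evaluate its definite integral in closed form.
# Assumptions (not from the paper): sigmoid activations, fixed random
# hidden weights with least-squares output weights, and a 1D interval
# rather than a generalized polyhedron.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softplus(z):
    # log(1 + e^z), the antiderivative of the sigmoid, computed stably.
    return np.logaddexp(0.0, z)

# Target integrand to approximate on [0, 1] (illustrative choice).
f = lambda x: np.exp(-x) * np.cos(4 * x)

# Shallow network: f_hat(x) = sum_i a_i * sigmoid(w_i * x + b_i) + c.
H = 50                          # number of hidden units
w = rng.normal(0.0, 8.0, H)     # fixed hidden weights
b = rng.normal(0.0, 4.0, H)     # fixed hidden biases

# Fit output weights a (and bias c) by least squares on sample points.
xs = np.linspace(0.0, 1.0, 400)
Phi = np.column_stack([sigmoid(np.outer(xs, w) + b), np.ones_like(xs)])
coef, *_ = np.linalg.lstsq(Phi, f(xs), rcond=None)
a, c = coef[:-1], coef[-1]

def F(x):
    """Closed-form antiderivative of the fitted network:
    sum_i (a_i / w_i) * softplus(w_i * x + b_i) + c * x."""
    return np.sum((a / w) * softplus(w * x + b)) + c * x

def nni_integral(lo, hi):
    # Definite integral of the fitted network: no sampling needed, so
    # each new pair of bounds costs only one formula evaluation.
    return F(hi) - F(lo)

# Once trained, many different bounds can be evaluated cheaply.
for lo, hi in [(0.0, 1.0), (0.2, 0.7), (0.5, 0.9)]:
    print(f"[{lo}, {hi}] -> {nni_integral(lo, hi):.6f}")
```

This is the same pre-compute-then-evaluate pattern the abstract describes for generalized polyhedra in higher dimensions; the sketch only illustrates the interval case.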

Highlights

  • The need to integrate across multiple dimensions frequently arises in many engineering, statistics, and financial problems [1]

  • In some cases for this family, we see that Neural Network Integration (NNI) outperforms the adaptive routine. These results lead us to believe that extensions to the current study, such as integrating NNI with a variance-based adaptive sampling algorithm, or intelligently weighting the error terms of the neural network during training, could lead to higher-accuracy results on integrand functions with discontinuities, singularities, or otherwise varying regularity

  • We present a novel numerical integration technique, Neural Network Integration, or NNI


Summary

INTRODUCTION

The need to integrate across multiple dimensions frequently arises in many engineering, statistics, and financial problems [1]. These parameters were selected following Sloan [2], with the goal of keeping the integration difficulty of each family approximately equal. Note that both TRAPZ and ADAPT require specific structure to their sampling points, and as such, the number of integration calls in these methods was sometimes lower than the number of integrand calls N. In some cases for this family, we see that NNI outperforms the adaptive routine. These results lead us to believe that extensions to the current study, such as integrating NNI with a variance-based adaptive sampling algorithm, or intelligently weighting the error terms of the neural network during training, could lead to higher-accuracy results on integrand functions with discontinuities, singularities, or otherwise varying regularity. NNI training time would not need to be limited in the same way (see Section IV), and training could be done over long periods on specialized hardware.
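For reference, the Genz families referred to above have standard functional forms; the sketch below shows two of them (a smooth oscillatory family and a discontinuous one) to make "varying regularity" concrete. The parameter values here are illustrative placeholders, not the values the paper selected following Sloan [2].

```python
# Two of the standard Genz test families, with illustrative parameters.
import numpy as np

def genz_oscillatory(x, c, w1):
    # Smooth, oscillatory family: cos(2*pi*w1 + sum_i c_i x_i).
    return np.cos(2.0 * np.pi * w1 + np.dot(c, x))

def genz_discontinuous(x, c, w):
    # Discontinuous family: exp(sum_i c_i x_i) when x_1 <= w_1 and
    # x_2 <= w_2, and zero elsewhere.
    if np.any(x[:2] > w[:2]):
        return 0.0
    return np.exp(np.dot(c, x))

rng = np.random.default_rng(1)
d = 3
c = rng.uniform(0.0, 1.0, d)   # illustrative difficulty parameters
w = rng.uniform(0.0, 1.0, d)   # illustrative shift parameters
x = rng.uniform(0.0, 1.0, d)
print(genz_oscillatory(x, c, w[0]), genz_discontinuous(x, c, w))
```

The discontinuous family is the kind of integrand on which the results above show NNI performing worst, since a smooth shallow network must smear the jump.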

INTEGRATION TIME
APPLICATIONS
APPLICATIONS TO OPTIMIZATION OF INTEGRAL FUNCTIONS
CONCLUSION