Abstract

The goal of this paper is to model hypothesis testing. A “real situation” is given in the form of a response surface, defined by an expensive, derivative-free, continuous objective function. An ideal hypothesis corresponds to a global minimum of this function, so hypothesis testing is converted into optimization of the response surface. First, the objective function is evaluated at a few points. Then, a hypothetical (surrogate) surface landscape is created from an ensemble of approximations of the objective function. The approximations are produced by neural networks that use the already evaluated samples as their training set. The hypothesis landscape, adapted by a merit function, estimates the possibility of obtaining at a given point a value better than the best value achieved so far among the evaluated points. The most promising point (a minimum of the adapted function) is used as the next sample point for the true expensive objective function. Its value is then used to retrain the neural networks, creating a new hypothesis landscape. The results suggest that (1) to reach a global minimum, it may be useful to have an estimate of the whole response surface, and therefore to explore also those points where maxima are predicted, and (2) an assembly of modules predicting the next sample point from the same set of sample points can be more advantageous than a single neural network predictor.
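The loop sketched below illustrates the surrogate-assisted procedure described above: evaluate the expensive objective at a few points, fit an ensemble of neural network approximations on the evaluated samples, adapt the resulting landscape with a merit function, and take the minimum of the adapted landscape as the next expensive evaluation. This is a minimal sketch under stated assumptions, not the paper's implementation: the toy objective, the candidate-sampling scheme, the `kappa` weight, and the mean-minus-disagreement merit function are all illustrative choices, since the abstract does not specify the exact merit function.

```python
# Minimal sketch of the surrogate-assisted optimization loop described above.
# The objective, merit function, and all parameter values are illustrative
# assumptions; the paper's exact formulation is not given in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_objective(x):
    # Stand-in for the true expensive, derivative-free objective (assumed).
    return float(np.sum(x**2) + np.sin(5.0 * x[0]))

rng = np.random.default_rng(0)
dim, n_init, n_iters, kappa = 2, 5, 20, 2.0

# Evaluate the objective at a few initial sample points.
X = rng.uniform(-1.0, 1.0, size=(n_init, dim))
y = np.array([expensive_objective(x) for x in X])

for _ in range(n_iters):
    # Hypothesis landscape: an ensemble of neural network approximations,
    # each trained on the already evaluated samples.
    ensemble = [
        MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=s).fit(X, y)
        for s in range(5)
    ]

    # Candidate points at which the surrogate landscape is inspected.
    cand = rng.uniform(-1.0, 1.0, size=(512, dim))
    preds = np.stack([m.predict(cand) for m in ensemble])  # shape (5, 512)

    # Merit function (assumed lower-confidence-bound style): favor low
    # predicted values, but also points where the ensemble disagrees,
    # i.e. where a value better than the current best might still be found.
    merit = preds.mean(axis=0) - kappa * preds.std(axis=0)

    # The most promising point becomes the next expensive evaluation,
    # and its true value enters the training set for the next iteration.
    x_next = cand[np.argmin(merit)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next))

print("best value found:", y.min(), "at", X[np.argmin(y)])
```

Subtracting the ensemble's disagreement in the merit function is one simple way to realize the paper's observation (1): it drives sampling toward regions where the surrogate is uncertain, including regions where some ensemble members predict high values.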
