Abstract

In this paper, we propose an iterative method based on reduced-space approximations for unconstrained optimization problems. The method works as follows: at each iteration, samples are drawn around the current solution, for instance from a Normal distribution; gradients are computed (or approximated) at all samples in order to build reduced spaces in which descent directions of the cost function are estimated. Intermediate solutions are then updated along these directions, and the overall process is repeated until a stopping criterion is satisfied. The convergence of the proposed method is proven theoretically under classic line-search assumptions. Experiments are performed on well-known benchmark optimization problems and a non-linear data assimilation problem. The results reveal that, as the number of sample points increases, gradient norms approach zero more quickly; moreover, in the data assimilation context, error norms are reduced by several orders of magnitude relative to prior errors when the assimilation step is performed with the proposed formulation.
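To make the iteration concrete, the following is a minimal sketch of one plausible instantiation of the scheme described above. The specific choices here are assumptions not fixed by the abstract: the reduced space is taken as the span of gradients evaluated at Normal samples, orthonormalized via QR; the descent direction is the projection of the negative gradient onto that span; and the step length comes from Armijo backtracking.

```python
import numpy as np

def reduced_space_descent(f, grad, x0, n_samples=10, sigma=0.1,
                          max_iter=100, tol=1e-6):
    """Sketch of a sampling-based reduced-space descent method.

    Hypothetical design choices (not specified in the abstract):
    QR-orthonormalized gradient samples as the reduced basis, and
    Armijo backtracking for the step length.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stopping criterion
            break
        # Draw samples around the current solution from a Normal distribution.
        samples = x + sigma * np.random.randn(n_samples, x.size)
        # Gradients at the samples span the reduced space.
        G = np.stack([grad(s) for s in samples], axis=1)
        Q, _ = np.linalg.qr(G)               # orthonormal basis of the span
        d = -Q @ (Q.T @ g)                   # projected steepest-descent direction
        if d @ g >= 0:                       # fall back if not a descent direction
            d = -g
        # Armijo backtracking line search.
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
    return x

# Usage on a simple quadratic test problem:
# f = lambda x: 0.5 * x @ x
# grad = lambda x: x
# x_star = reduced_space_descent(f, grad, np.ones(50))
```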
