Abstract

In this article, we establish a cluster-based gradient method (CGM) that combines the K-means clustering algorithm with stochastic gradient descent (SGD). By clustering the sampled solutions, we use the cluster centroids to represent the sampled data and thereby estimate the full gradient. It is well known that full gradient descent (FGD) provides the steepest descent direction for finding a local minimum of the stochastic control problem under consideration. However, its computational cost, which is proportional to the product of the sample size and the numerical cost per sample, often makes FGD prohibitive for large-scale optimization problems. To reduce this formidable cost and the risk of getting stuck in a local minimum, SGD was proposed; it can be regarded as a stochastic approximation of FGD. This, however, results in slow convergence, because each parameter update is driven by an inexact gradient estimate. Our study shows that CGM provides a good stochastic approximation to the full gradient with a small sample size while converging more stably and faster than SGD. To verify our algorithm, we test it on a stochastic elliptic control problem. The numerical results validate our method as a reliable gradient descent method with great potential for application to optimization problems.
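The clustered-gradient idea in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: it assumes a per-sample gradient function grad_fn(theta, x) is available, uses scikit-learn's KMeans for the clustering step, and the names cluster_gradient and cgm_step are hypothetical. Each cluster centroid stands in for its members, weighted by the fraction of samples in that cluster, so only K gradient evaluations are needed per step instead of N.

    # Minimal sketch of a cluster-based gradient step (assumed interface).
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_gradient(samples, grad_fn, theta, n_clusters=8, seed=0):
        # Approximate the full gradient (1/N) * sum_i grad_fn(theta, x_i)
        # by evaluating grad_fn only at K cluster centroids, each weighted
        # by the fraction of samples assigned to its cluster.
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(samples)
        weights = np.bincount(km.labels_, minlength=n_clusters) / len(samples)
        return sum(w * grad_fn(theta, c)
                   for w, c in zip(weights, km.cluster_centers_))

    def cgm_step(theta, samples, grad_fn, lr=1e-2, n_clusters=8):
        # One descent step using the clustered gradient estimate.
        return theta - lr * cluster_gradient(samples, grad_fn, theta, n_clusters)

Under these assumptions, K gradient evaluations at the centroids replace N per-sample evaluations, which is where the cost reduction relative to FGD comes from, while the size-weighted average keeps the estimate closer to the full gradient than a single-sample SGD step.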
