Abstract
The stochastic gradient descent algorithm is a classical and useful method for stochastic optimisation. While stochastic gradient descent has been theoretically investigated for decades and successfully applied in machine learning, for example in the training of deep neural networks, it essentially relies on obtaining unbiased estimates of the gradients/subgradients of the objective function. In this paper, by constructing randomised differences of the objective function, a gradient-free algorithm, named the stochastic randomised-difference descent algorithm, is proposed for stochastic convex optimisation. Under the assumption that the objective function is strongly convex, it is proved that the estimates generated by stochastic randomised-difference descent converge to the optimal value with probability one, and the convergence rates of both the mean square error of the estimates and the regret functions are established. Finally, some numerical examples are presented.
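To illustrate the idea of a gradient-free update built from randomised differences of noisy function evaluations, the following is a minimal sketch. The function name, the Rademacher perturbation directions, and the particular step-size and difference-size schedules (a_k, c_k) are illustrative assumptions for demonstration, not the paper's exact construction or conditions.

```python
import numpy as np

def randomized_difference_descent(f, x0, num_iters=1000, a0=1.0, c0=1.0, seed=0):
    """Gradient-free stochastic optimisation via randomised differences.

    f(x, rng) returns a noisy evaluation of the objective at x.
    The schedules a_k and c_k below are illustrative choices; the
    paper's assumptions on these sequences may differ.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, num_iters + 1):
        a_k = a0 / k             # diminishing step size (assumed schedule)
        c_k = c0 / k ** 0.25     # diminishing difference size (assumed schedule)
        # Random perturbation direction with +/-1 entries
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        # Two noisy function evaluations along the random direction
        f_plus = f(x + c_k * delta, rng)
        f_minus = f(x - c_k * delta, rng)
        # Randomised-difference estimate of the gradient (no gradient oracle needed)
        g = (f_plus - f_minus) / (2.0 * c_k) * delta
        x = x - a_k * g
    return x

if __name__ == "__main__":
    # Example: noisy strongly convex quadratic with optimum at the all-ones vector
    def noisy_quadratic(x, rng):
        return np.sum((x - 1.0) ** 2) + rng.normal(scale=0.1)

    x_hat = randomized_difference_descent(noisy_quadratic, x0=np.zeros(5))
    print(x_hat)  # should be close to the all-ones optimum
```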