Abstract

We introduce a new online algorithm for expected log-likelihood maximization in situations where the objective function is multimodal or has saddle points. The key element underpinning the algorithm is a probability distribution that concentrates on the target parameter value as the sample size increases and can be efficiently estimated by means of a standard particle filter algorithm. This distribution depends on a learning rate: the faster the learning rate, the more quickly the distribution concentrates on the desired element of the search space, but the less likely the algorithm is to escape from a local optimum of the objective function. In order to achieve a fast convergence rate with a slow learning rate, our algorithm exploits the acceleration property of averaging, which is well known from the stochastic gradient literature. Considering several challenging estimation problems, our numerical experiments show that, with high probability, the algorithm successfully finds the highest mode of the objective function and converges to the global maximizer at the optimal rate. While the focus of this work is expected log-likelihood maximization, the proposed methodology and its theory apply more generally to optimization of a function defined through an expectation.
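The abstract names three ingredients: a particle approximation of a distribution that concentrates on the maximizer, a learning rate governing the speed of that concentration, and Polyak-Ruppert-style averaging to recover a fast rate. The sketch below illustrates how these pieces might fit together on a toy Gaussian model; it is a minimal illustration of the described mechanism, not the authors' algorithm, and every modeling choice (the jitter schedule, the resampling scheme, the example model) is an assumption made for the example.

```python
# Hypothetical sketch: particle-filter-style online maximization of an
# expected log-likelihood, with averaging of the running estimate.
# The jitter schedule and toy model are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def log_lik(theta, y):
    """Toy model: y ~ N(theta, 1); the target is E[log p(y | theta)]."""
    return -0.5 * (y - theta) ** 2

def online_pf_optimizer(data, n_particles=500, alpha=0.7):
    """alpha in (1/2, 1] plays the role of the learning rate: larger
    alpha concentrates the particle cloud faster but explores less."""
    theta = rng.uniform(-10.0, 10.0, size=n_particles)  # initial cloud
    avg, estimates = 0.0, []
    for t, y in enumerate(data, start=1):
        # Reweight particles by the likelihood of the new observation.
        logw = log_lik(theta, y)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample (multinomial) to focus on high-likelihood regions.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        theta = theta[idx]
        # Jitter with variance shrinking at rate t^(-alpha): the slower
        # the shrinkage, the easier it is to escape a local optimum.
        theta += rng.normal(scale=t ** (-alpha / 2.0), size=n_particles)
        # Averaging the cloud means over time mimics the acceleration
        # property used to regain a fast rate under a slow learning rate.
        avg += (theta.mean() - avg) / t
        estimates.append(avg)
    return np.array(estimates)

# Usage: observations from N(2, 1); the averaged estimate should
# approach the global maximizer theta* = 2 as data accumulate.
data = rng.normal(loc=2.0, scale=1.0, size=2000)
est = online_pf_optimizer(data)
print(est[-1])  # close to 2.0
```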
