Abstract

Simulation-based inference has attracted much attention in recent years, as direct computation of the likelihood function is difficult or even impossible in many real-world problems. Iterated filtering (Ionides, Bretó, and King 2006; Ionides, Bhadra, Atchadé, and King 2011) enables maximization of the likelihood via model perturbations, approximating the gradient of the log-likelihood through sequential Monte Carlo filtering. By an application of Stein's identity, Doucet, Jacob, and Rubenthaler (2013) developed a second-order approximation of the gradient of the log-likelihood using sequential Monte Carlo smoothing. Building on these gradient approximations, we develop a new algorithm for maximizing the likelihood using the Nesterov accelerated gradient. We adapt the accelerated inexact gradient algorithm (Ghadimi and Lan 2016) to the iterated filtering framework, relaxing the requirement that the gradient approximation be unbiased. We devise a perturbation policy for iterated filtering that allows the new algorithm to converge at an optimal rate for both concave and non-concave log-likelihood functions. The new algorithm is comparable to the recently developed Bayes map iterated filtering approach and outperforms the original iterated filtering approach.
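To make the core idea concrete, the following is a minimal sketch of Nesterov accelerated gradient ascent driven by an inexact gradient oracle, in the spirit of the abstract. It is not the paper's algorithm: the oracle `grad_estimate`, the constant step size, and the standard (k-1)/(k+2) momentum schedule are illustrative assumptions standing in for the SMC-based score approximation and the paper's perturbation policy.

```python
import numpy as np

def nesterov_ascent(grad_estimate, theta0, step=0.1, n_iters=200):
    """Nesterov accelerated gradient ascent with an inexact gradient oracle.

    grad_estimate(theta) returns a noisy estimate of the score (the gradient
    of the log-likelihood), e.g. as produced by sequential Monte Carlo
    filtering or smoothing. This is a generic sketch, not the paper's method.
    """
    theta = np.asarray(theta0, dtype=float)
    theta_prev = theta.copy()
    for k in range(1, n_iters + 1):
        # Look-ahead point: extrapolate along the previous step (momentum).
        y = theta + (k - 1) / (k + 2) * (theta - theta_prev)
        # Ascent step using the inexact gradient at the look-ahead point.
        g = grad_estimate(y)
        theta_prev = theta
        theta = y + step * g
    return theta

# Toy usage: maximize a concave quadratic log-likelihood whose gradient is
# observed with noise, mimicking the Monte Carlo error of an SMC estimate.
rng = np.random.default_rng(0)
true_max = np.array([2.0, -1.0])
noisy_score = lambda th: -(th - true_max) + 0.05 * rng.standard_normal(2)
print(nesterov_ascent(noisy_score, np.zeros(2)))  # approaches [2, -1]
```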
