Accelerated Iterated Filtering


  • Dao Nguyen University of Mississippi


Simulation-based inference has attracted much attention in recent years, as direct computation of the likelihood function is difficult or even impossible in many real-world problems. Iterated filtering (Ionides, Bretó, and King 2006; Ionides, Bhadra, Atchadé, and King 2011) enables maximization of the likelihood function via model perturbations and approximation of the gradient of the log-likelihood through sequential Monte Carlo filtering. By an application of Stein's identity, Doucet, Jacob, and Rubenthaler (2013) developed a second-order approximation of the gradient of the log-likelihood using sequential Monte Carlo smoothing. Based on these gradient approximations, we develop a new algorithm for maximizing the likelihood using the Nesterov accelerated gradient. We adapt the accelerated inexact gradient algorithm (Ghadimi and Lan 2016) to the iterated filtering framework, relaxing the condition that the gradient approximation be unbiased. We devise a perturbation policy for iterated filtering that allows the new algorithm to converge at an optimal rate for both concave and non-concave log-likelihood functions. The new algorithm is comparable to the recently developed Bayes map iterated filtering approach and outperforms the original iterated filtering approach.
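The core idea of combining an inexact (Monte Carlo) gradient estimate with Nesterov acceleration can be illustrated with a minimal sketch. The code below is not the paper's algorithm: the sequential Monte Carlo gradient approximation is replaced by a stand-in noisy gradient of a simple concave quadratic "log-likelihood", and all function names, step sizes, and the momentum constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def noisy_grad(theta):
    # Stand-in for an inexact gradient estimate of the log-likelihood
    # (in the paper this would come from SMC filtering/smoothing).
    # Here the "log-likelihood" is -(theta - 3)^2, maximized at theta = 3.
    true_grad = -2.0 * (theta - 3.0)
    return true_grad + rng.normal(scale=0.1)  # Monte Carlo noise


def nesterov_ascent(theta0, step=0.1, momentum=0.9, iters=200):
    """Nesterov accelerated gradient ascent with a noisy gradient oracle."""
    theta, velocity = theta0, 0.0
    for _ in range(iters):
        # Evaluate the gradient at the look-ahead point: the defining
        # feature of Nesterov acceleration, as opposed to plain momentum.
        lookahead = theta + momentum * velocity
        velocity = momentum * velocity + step * noisy_grad(lookahead)
        theta = theta + velocity  # ascent step (maximization)
    return theta


theta_hat = nesterov_ascent(0.0)
print(theta_hat)  # typically lands near the maximizer 3.0
```

The look-ahead evaluation is what distinguishes Nesterov's scheme from heavy-ball momentum; the paper's contribution is showing that this acceleration retains a good convergence rate even when the gradient oracle is only an approximation, as produced by sequential Monte Carlo.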


How to Cite

Nguyen, D. Accelerated Iterated Filtering. Austrian Journal of Statistics, 52(4), 117–142. Retrieved from