Particle-based Variational Inference (ParVI) methods have been widely adopted in deep Bayesian inference tasks such as Bayesian neural networks and Gaussian processes, owing to their efficiency in generating high-quality samples given the score of the target distribution. Typically, ParVI methods evolve a weighted particle system by approximating the first-order Wasserstein gradient flow, so as to reduce the dissimilarity between the particle system's empirical distribution and the target distribution. Recent advances in ParVI have explored more sophisticated gradient flows to obtain refined particle systems with either accelerated position updates or dynamic weight adjustments. In this paper, we introduce the semi-Hamiltonian gradient flow on a novel Information-Fisher-Rao space, termed the SHIFR flow, and propose the first ParVI framework that simultaneously provides accelerated position updates and dynamic weight adjustments, named the General Accelerated Dynamic-Weight Particle-based Variational Inference (GAD-PVI) framework. GAD-PVI is compatible with different dissimilarity measures between the empirical and target distributions, as well as with different approximation approaches to the gradient flow. Moreover, with an appropriate choice of dissimilarity, GAD-PVI can also produce high-quality samples when analytical scores are unavailable. Experiments on both score-based and sample-based tasks demonstrate the faster convergence and lower approximation error of GAD-PVI methods compared with the state-of-the-art.
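For background only, the sketch below shows a standard fixed-weight ParVI update (Stein Variational Gradient Descent, a well-known kernelized approximation of the Wasserstein gradient flow), which GAD-PVI generalizes with accelerated positions and dynamic weights; this is not the GAD-PVI update itself, and the function names, RBF kernel choice, and step sizes are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, bandwidth=1.0):
    """Pairwise RBF kernel k(x_j, x_i) and its gradient w.r.t. x_j."""
    diffs = X[:, None, :] - X[None, :, :]             # diffs[j, i] = x_j - x_i, shape (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)            # squared distances, shape (n, n)
    K = np.exp(-sq_dists / bandwidth)                 # K[j, i] = k(x_j, x_i)
    grad_K = -2.0 / bandwidth * diffs * K[..., None]  # d k(x_j, x_i) / d x_j, shape (n, n, d)
    return K, grad_K

def svgd_step(X, score_fn, step_size=0.05, bandwidth=1.0):
    """One SVGD update: particles follow a kernelized estimate of the
    first-order Wasserstein gradient flow toward the target distribution."""
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, bandwidth)
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K.T @ score_fn(X) + grad_K.sum(axis=0)) / n
    return X + step_size * phi

# Usage (assumed example): move particles toward a standard Gaussian target, whose score is -x.
rng = np.random.default_rng(0)
particles = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
for _ in range(500):
    particles = svgd_step(particles, score_fn=lambda X: -X)
```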