- Research Article
- 10.3934/fods.2024026
- Jan 1, 2025
- Foundations of Data Science
- Harbir Antil + 1 more
- Research Article
- 10.3934/fods.2025005
- Jan 1, 2025
- Foundations of Data Science
- J Wilson Peoples + 1 more
- Research Article
- 10.3934/fods.2023014
- Jan 1, 2025
- Foundations of Data Science
- Xiongjie Chen + 1 more
By approximating posterior distributions with weighted samples, particle filters (PFs) provide an efficient mechanism for solving non-linear sequential state estimation problems. While the effectiveness of particle filters has been recognised in various applications, their performance relies on the knowledge of dynamic models and measurement models, as well as the construction of effective proposal distributions. An emerging trend involves constructing components of particle filters using neural networks and optimising them by gradient descent, and such data-adaptive particle filtering approaches are often called differentiable particle filters. Due to the expressiveness of neural networks, differentiable particle filters are a promising computational tool for performing inference on sequential data in complex, high-dimensional tasks, such as vision-based robot localisation. In this paper, we review recent advances in differentiable particle filters and their applications. We place special emphasis on different design choices for key components of differentiable particle filters, including dynamic models, measurement models, proposal distributions, optimisation objectives, and differentiable resampling techniques.
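The weighted-sample approximation the abstract describes can be illustrated with the classic (non-differentiable) bootstrap particle filter, the baseline that differentiable particle filters build on. This is a minimal sketch for an assumed 1-D nonlinear benchmark state-space model, not any specific method from the review; in a differentiable PF, the hand-coded dynamic and measurement models below would be replaced by neural networks and the multinomial resampling by a differentiable surrogate.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for the 1-D nonlinear benchmark model
       x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}^2) + v_t,  v_t ~ N(0, 1)
       y_t = x_t^2 / 20 + w_t,                                w_t ~ N(0, 1)
    Returns the sequence of posterior-mean state estimates."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 2.0, size=n_particles)  # samples from the prior
    estimates = []
    for y in observations:
        # Propagate particles through the dynamic model; the proposal here is
        # the transition density, as in the classic bootstrap filter.
        particles = (0.5 * particles
                     + 25.0 * particles / (1.0 + particles**2)
                     + rng.normal(0.0, 1.0, size=n_particles))
        # Weight each particle by the measurement likelihood p(y | x).
        log_w = -0.5 * (y - particles**2 / 20.0) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Multinomial resampling: non-differentiable, which is precisely the
        # step differentiable PFs replace (e.g. with optimal-transport resampling).
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return estimates
```

The resampling step is where the gradient breaks: sampling indices is a discrete operation, so end-to-end training of the neural components requires one of the differentiable resampling techniques the review surveys.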
- Research Article
- 10.3934/fods.2024043
- Jan 1, 2025
- Foundations of Data Science
- Benjamin Sanderse + 3 more
Closure problems are omnipresent when simulating multiscale systems: some quantities and processes cannot be fully prescribed despite their effects on the simulation's accuracy. Recently, scientific machine learning approaches have been proposed as a way to tackle the closure problem, combining traditional (physics-based) modeling with data-driven (machine-learned) techniques, typically by enriching differential equations with neural networks. This paper reviews the different reduced model forms, distinguished by the degree to which they include known physics, and the different objectives of a priori and a posteriori learning. The importance of adhering to physical laws (such as symmetries and conservation laws) in choosing both the reduced model form and the learning method is discussed. The effect of spatial and temporal discretization and recent trends toward discretization-invariant models are reviewed. In addition, we make connections between closure problems and several other research disciplines: inverse problems, Mori-Zwanzig theory, and multi-fidelity methods. In conclusion, much progress has been made with scientific machine learning approaches for solving closure problems, but many challenges remain. In particular, the generalizability and interpretability of learned models are major issues that need to be addressed further.
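The "differential equation enriched with a neural network" pattern the abstract mentions can be sketched in a few lines. This is a generic illustration under assumed names (a tiny MLP standing in for the unresolved-scale term, added to a toy resolved right-hand side), not a model form from the paper itself:

```python
import numpy as np

def mlp_closure(u_bar, W1, b1, W2, b2):
    """Tiny MLP standing in for the unresolved-scale (closure) term."""
    h = np.tanh(W1 @ u_bar + b1)
    return W2 @ h + b2

def step_closed_model(u_bar, dt, params, nu=0.01):
    """One explicit-Euler step of a coarse model du/dt = f(u) + closure(u),
    where f is the known (resolved) physics -- here simple linear damping --
    and the closure term is learned from data."""
    f = -nu * u_bar
    return u_bar + dt * (f + mlp_closure(u_bar, *params))
```

In a priori learning, the MLP weights would be fit to match a reference closure term directly; in a posteriori learning, they would be fit so that trajectories produced by `step_closed_model` match reference trajectories, a distinction the paper treats in detail.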
- Research Article
- 10.3934/fods.2024050
- Jan 1, 2025
- Foundations of Data Science
- Phillip Kearns + 2 more
- Research Article
- 10.3934/fods.2024005
- Jan 1, 2025
- Foundations of Data Science
- Iñigo Urteaga + 1 more
We extend Bayesian multi-armed bandit (MAB) algorithms beyond their original setting by making use of sequential Monte Carlo (SMC) methods. A MAB is a sequential decision-making problem where the goal is to learn a policy that maximizes long-term payoff, and only the reward of the executed action is observed. In the stochastic MAB, the reward for each action is generated from an unknown distribution, often assumed to be stationary. To decide which action to take next, a MAB agent must learn the characteristics of the unknown reward distribution, e.g., compute its sufficient statistics. However, closed-form expressions for these statistics are analytically intractable except for simple, stationary cases. We here utilize SMC for estimation of the statistics Bayesian MAB agents compute, and devise flexible policies that can address a rich class of bandit problems: i.e., MABs with nonlinear, stateless and context-dependent reward distributions that evolve over time. We showcase how non-stationary bandits, where time dynamics are modeled via linear dynamical systems, can be successfully addressed by SMC-based Bayesian bandit agents. We empirically demonstrate good regret performance of the proposed SMC-based bandit policies in several MAB scenarios that have remained elusive, i.e., in non-stationary bandits with nonlinear rewards.
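The core idea, replacing closed-form posterior statistics with a weighted-sample approximation inside a Bayesian bandit policy, can be sketched with Thompson sampling on a Gaussian-reward bandit. This is a simplified, stationary illustration under assumed parameters (wide Gaussian prior, unit reward noise, resample-with-jitter rejuvenation), not the paper's full SMC machinery for non-stationary, nonlinear rewards:

```python
import numpy as np

def smc_thompson_sampling(true_means, horizon=500, n_samples=300, seed=0):
    """Thompson sampling where each arm's posterior over its mean reward is
    approximated with weighted samples instead of a closed-form distribution."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    # Weighted samples approximating each arm's posterior; wide Gaussian prior.
    theta = rng.normal(0.0, 2.0, size=(n_arms, n_samples))
    logw = np.zeros((n_arms, n_samples))
    rewards = []
    for _ in range(horizon):
        # Thompson step: draw one candidate mean per arm from its weighted
        # sample approximation and play the arm with the largest draw.
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        draws = [theta[a, rng.choice(n_samples, p=w[a])] for a in range(n_arms)]
        a = int(np.argmax(draws))
        r = rng.normal(true_means[a], 1.0)  # observe only the played arm's reward
        rewards.append(float(r))
        # Reweight the played arm's samples by the Gaussian reward likelihood.
        logw[a] += -0.5 * (r - theta[a]) ** 2
        wa = np.exp(logw[a] - logw[a].max())
        wa /= wa.sum()
        # Resample with a small jitter when the effective sample size degenerates.
        if 1.0 / np.sum(wa**2) < n_samples / 2:
            idx = rng.choice(n_samples, size=n_samples, p=wa)
            theta[a] = theta[a, idx] + rng.normal(0.0, 0.05, n_samples)
            logw[a] = 0.0
    return rewards
```

Handling the non-stationary case the abstract emphasizes would add a propagation step (pushing each arm's samples through the assumed linear dynamical system) before reweighting, exactly where SMC goes beyond conjugate closed-form updates.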
- Research Article
- 10.3934/fods.2025011
- Jan 1, 2025
- Foundations of Data Science
- Jian Liu + 2 more
- Research Article
- 10.3934/fods.2024029
- Jan 1, 2025
- Foundations of Data Science
- Amanda A Howard + 3 more
- Research Article
- 10.3934/fods.2024022
- Jan 1, 2025
- Foundations of Data Science
- Piermario Vitullo + 2 more
- Research Article
- 10.3934/fods.2024031
- Jan 1, 2025
- Foundations of Data Science
- Pavel Bochev + 1 more