Abstract

Markov population models are a widespread formalism used to model the dynamics of complex systems, with applications in systems biology and many other fields. The associated continuous-time Markov stochastic process is often analyzed by simulation, which can be costly for large or stiff systems, particularly when a massive number of simulations must be performed, e.g. in a multi-scale model. A strategy to reduce the computational load is to abstract the population model, replacing it with a simpler stochastic model that is faster to simulate. Here we pursue this idea, exploring and comparing state-of-the-art generative models, which are flexible enough to automatically learn, from observed realizations of the system, distributions over entire trajectories rather than single simulation steps. In particular, we compare a Generative Adversarial setting with a Score-based Diffusion approach and show that the latter outperforms the former in terms of both accuracy and stability, at the cost of slightly higher simulation times. To improve the accuracy of abstract samples, we develop an active learning framework that enriches the dataset with observations for which the expected satisfaction of a temporal requirement differs significantly from that of the abstraction. We experimentally show that the proposed abstractions are well suited to multi-scale and data-driven scenarios, meaning that we can infer a (black-box) dynamical model from a pool of real data.
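As a minimal illustration of the kind of continuous-time simulation the abstraction is meant to replace, the sketch below implements Gillespie's stochastic simulation algorithm for a simple birth-death population model. The model, parameter values, and function names are illustrative assumptions, not taken from the paper; a dataset of such trajectories is the type of training data a trajectory-level generative abstraction would learn from.

```python
import numpy as np

def gillespie_birth_death(x0, birth_rate, death_rate, t_max, rng=None):
    """Simulate one trajectory of a birth-death CTMC with Gillespie's SSA.

    x0          -- initial population count
    birth_rate  -- per-capita birth rate
    death_rate  -- per-capita death rate
    t_max       -- time horizon
    Returns arrays of jump times and population counts.
    """
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        birth_prop = birth_rate * x
        death_prop = death_rate * x
        total = birth_prop + death_prop
        if total == 0.0:                     # population extinct: absorbing state
            break
        t += rng.exponential(1.0 / total)    # exponential waiting time to next event
        if t > t_max:
            break
        if rng.random() < birth_prop / total:  # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

# A (hypothetical) training pool of simulated trajectories for the abstraction:
trajectories = [
    gillespie_birth_death(x0=50, birth_rate=0.1, death_rate=0.12, t_max=100.0)
    for _ in range(1000)
]
```

Generating such a pool is exactly the expensive step for large or stiff systems; the generative abstraction, once trained, would produce comparable trajectories in a fraction of the time.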
