Abstract

Simulation-based optimization using agent-based models is typically carried out under the assumption that the gradient describing the sensitivity of the simulation output to the input cannot be evaluated directly. To nonetheless apply gradient-based optimization methods, which efficiently steer the search toward a local optimum, gradient estimation methods can be employed. However, many simulation runs are needed to obtain accurate estimates when the input dimension is large. Automatic differentiation (AD) is a family of techniques for computing the gradients of general programs directly. Here, we explore the use of AD in the context of time-driven agent-based simulations. By substituting common discrete model elements such as conditional branching with smooth approximations, we obtain gradient information across discontinuities in the model logic. Using a synthetic grid-based model, an epidemics model, and a microscopic traffic model as examples, we study the fidelity and overhead of the differentiable simulations as well as the convergence speed and solution quality achieved by gradient-based optimization compared with gradient-free methods. In traffic signal timing optimization problems with a high input dimension, the gradient-based methods exhibit substantially superior performance. Optimization progress is further increased by combining gradient-free and gradient-based methods. We demonstrate that the approach enables gradient-based training of neural network-controlled simulation entities embedded in the model logic. Finally, we show that the performance overhead of differentiable agent-based simulations can be reduced substantially by exploiting sparsity in the model logic.
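
The smoothing idea can be illustrated with a minimal sketch, not the paper's implementation: a hard conditional branch blocks gradient flow under AD, while a sigmoid blend of the two branch outcomes remains differentiable. The toy agent rule and the sharpness parameter k below are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def hard_update(state, threshold):
    # Discrete branch: the comparison blocks gradient flow, so the
    # derivative w.r.t. threshold is zero almost everywhere.
    return jnp.where(state > threshold, state + 1.0, state)

def smooth_update(state, threshold, k=10.0):
    # Sigmoid blend of the two branch outcomes: differentiable
    # everywhere and approaches hard_update as k grows.
    active = jax.nn.sigmoid(k * (state - threshold))
    return active * (state + 1.0) + (1.0 - active) * state

print(jax.grad(hard_update, argnums=1)(0.4, 0.5))    # 0.0: no gradient signal
print(jax.grad(smooth_update, argnums=1)(0.4, 0.5))  # non-zero gradient
```

Applying such substitutions throughout a model's logic is what makes the end-to-end simulation output differentiable with respect to its inputs.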
