Abstract

Global virtual time (GVT) is used in distributed simulations to reclaim memory, commit output, detect termination, and handle errors. It is a global function that is computed many times during the course of a simulation. A small GVT latency (the delay between a GVT value occurring and being detected) allows resources to be used more efficiently. We present an algorithm that minimizes this latency, and we prove its correctness. The algorithm is unique in that a target virtual time (TVT) is predetermined by an initiator, who then detects when GVT ≥ TVT. This approach eliminates the avalanche effect, because the collection phase is spread out over time, and it permits regular and timely GVT updates. The algorithm does not require message acknowledgments, which significantly reduces the simulation's message overhead. One possible application is interactive simulation, where regular and timely GVT updates would keep the displayed output current and smooth.
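To make the TVT idea concrete, below is a minimal single-threaded sketch of the detection structure the abstract describes: an initiator fixes a target virtual time in advance, each logical process reports once it can guarantee it will never again produce an event with a timestamp below that target, and the initiator concludes GVT ≥ TVT once every process has reported. All names here (LP, maybe_report, and so on) are hypothetical, and the paper's acknowledgment-free handling of in-transit messages is deliberately elided; this toy version checks only local clocks.

```python
# Toy sketch of TVT-based GVT detection. Assumption-laden illustration:
# real GVT computation must also account for messages in transit, which
# the paper handles without acknowledgments; here each LP's guarantee
# is reduced to its local virtual time (LVT) passing the target.

TVT = 100.0  # target virtual time chosen by the initiator in advance


class LP:
    """A logical process with a local virtual time and a one-shot report flag."""

    def __init__(self, pid: int, lvt: float = 0.0):
        self.pid = pid
        self.lvt = lvt
        self.reported = False

    def advance(self, dt: float) -> None:
        """Advance this LP's local virtual time by dt."""
        self.lvt += dt

    def maybe_report(self, tvt: float) -> None:
        """Report (once) as soon as this LP can guarantee it will never
        again generate an event with timestamp below tvt."""
        if not self.reported and self.lvt >= tvt:
            self.reported = True


def initiator_detects(lps: list, tvt: float) -> bool:
    """Initiator side: GVT >= tvt once every LP has reported."""
    return all(lp.reported for lp in lps)


# Drive the toy forward. LPs progress at different rates and report as
# they individually pass TVT, so the collection phase is spread out over
# time rather than arriving in one avalanche of control messages.
lps = [LP(pid) for pid in range(4)]
step = 0
while not initiator_detects(lps, TVT):
    for lp in lps:
        lp.advance(dt=10.0 + 5.0 * lp.pid)  # heterogeneous progress rates
        lp.maybe_report(TVT)
    step += 1
print(f"GVT >= {TVT} detected after {step} steps")
```

Because the target is fixed before collection begins, each process can report asynchronously the moment it passes TVT, which is what spreads the collection phase over time and enables the regular, timely updates the abstract emphasizes.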
