Abstract

A major part of the design process for integrated circuits (ICs) is circuit verification, in which the correctness of a circuit's design is evaluated. Discrete event simulation is a central tool in this effort. As predicted by Moore's law, the number of transistors which can be placed on an IC doubles roughly every 18 months; as a result, simulation has become the major bottleneck in the circuit design process. One way to alleviate this difficulty is parallel (or distributed) circuit simulation. In this paper, we make use of a parallel gate-level simulator which we developed and which is based upon Time Warp. Gate-level simulations exhibit two characteristics which can easily result in unstable or severely degraded Time Warp performance: low computational granularity and a computational load which varies throughout the course of the simulation. Restraining the optimism of Time Warp via a bounded window and utilizing dynamic load balancing are approaches to dealing with these difficulties. In this paper, we make use of learning techniques from artificial intelligence (multiagent Q-learning, simulated annealing) to develop a combined bounded-window and dynamic load balancing algorithm for parallel digital logic simulation. We evaluated the performance of these algorithms on open-source Sparc and Leon designs and on two Viterbi decoder designs, and observed up to a 60 percent improvement in simulation time for one of the decoders.
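To make the Q-learning idea concrete, the sketch below shows how a logical process might learn a time-window size with a standard one-step Q-learning update. This is a minimal illustration, not the paper's algorithm: the state space (coarse load levels), the candidate window sizes, the reward signal (committed work minus rollback cost), and all hyperparameters are assumptions introduced here for exposition.

```python
import random

# Hypothetical discretization: states are coarse load levels observed by a
# logical process; actions are candidate time-window sizes (assumed values).
STATES = ["low_load", "medium_load", "high_load"]
ACTIONS = [100, 500, 1000]  # window sizes in simulated-time units (illustrative)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed learning-rate, discount, exploration

# Q-table: expected long-run reward of choosing a window size in a load state.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_window(state):
    """Epsilon-greedy selection over candidate window sizes."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update: Q += alpha*(r + gamma*max Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example step: the reward here is a stand-in metric, e.g. events committed
# minus events rolled back during the last window (not the paper's measure).
s = "high_load"
w = choose_window(s)
r = 42.0 - 7.0  # illustrative: useful work minus rollback cost
update(s, w, r, "medium_load")
```

In a multiagent setting each logical process would maintain its own Q-table over locally observed load, which is one plausible way to combine window control with dynamic load balancing.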
