Abstract

A decomposition technique known as node tearing is used to perform circuit simulation on a multiprocessor. The effect of partitioning the circuit on the final speedup achieved is considered. Using a very simple model for the LU decomposition time of sparse matrices, a circuit-partitioning problem based on node tearing is formulated to maximize speedup on a multiprocessor. An abstract hypergraph partitioning problem is then posed, along with an algorithm for its solution. The original circuit-partitioning problem is then transformed into an equivalent hypergraph partitioning problem, thereby generating partitions for the circuit. The tradeoff between circuit-partitioning time and the number of available processors, and its effect on the speedup factor, is also studied.
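To make the formulation concrete, the following is a minimal sketch of the kind of speedup estimate such a partitioning objective relies on. It assumes a simple α·n^β cost model for the sparse LU factorization of each subcircuit block and a serial factorization of the interconnection (border) block created by the tearing nodes; the function names, parameters, and scheduling heuristic are illustrative assumptions, not the paper's actual model.

```python
def lu_time(n, alpha=1.0, beta=1.5):
    """Assumed LU-decomposition time model for an n x n sparse block (illustrative)."""
    return alpha * n ** beta

def estimated_speedup(subcircuit_sizes, num_tearing_nodes, num_processors):
    """Estimate speedup of node-tearing-based parallel circuit simulation.

    subcircuit_sizes   -- internal-node counts, one per subcircuit
    num_tearing_nodes  -- size of the interconnection (border) block
    num_processors     -- processors available for subcircuit factorizations
    """
    total_nodes = sum(subcircuit_sizes) + num_tearing_nodes
    serial_time = lu_time(total_nodes)  # un-torn circuit factored on one processor

    # Assign subcircuits to processors with a longest-processing-time heuristic;
    # subcircuit blocks are factored in parallel, the border block serially after.
    loads = [0.0] * num_processors
    for n in sorted(subcircuit_sizes, reverse=True):
        loads[loads.index(min(loads))] += lu_time(n)

    parallel_time = max(loads) + lu_time(num_tearing_nodes)
    return serial_time / parallel_time

# Example: four subcircuits of 100 nodes each, 20 tearing nodes, 4 processors.
print(estimated_speedup([100, 100, 100, 100], 20, 4))
```

Under such a model, the partitioning objective is to choose subcircuits that balance the parallel factorization loads while keeping the border block small, since the tearing nodes contribute a serial term that caps the achievable speedup.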
