Abstract

This chapter discusses an efficient algorithm for model reduction of large-scale unstable systems on parallel computers. The major computational step involves the additive decomposition of the transfer function via a block diagonalization. The actual model reduction is then achieved by reducing the stable part using techniques based on state-space truncation. All core computational steps are based on the sign function method, and numerical experiments on a cluster of Intel Pentium-IV processors show the efficiency of the methods. Methods based on truncated state-space transformations usually differ in how the approximation error is measured and in the way they attempt to minimize it. Balanced truncation (BT) methods, singular perturbation approximation (SPA) methods, and optimal Hankel-norm approximation (HNA) methods all belong to the family of absolute error methods. The chapter discusses the BT method used to reduce the stable part of the system. These absolute error model reduction methods are likewise based on sign function computations and use low-rank factorizations of the system Gramians; sign function-based methods allow very efficient and scalable implementations of the proposed algorithms. The implementation of the suggested procedure for model reduction of unstable systems is briefly discussed, and the numerical examples reported for the Intel Pentium-IV cluster reveal the performance of the parallel algorithms.
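
To illustrate the kind of computation the abstract describes, the following is a minimal serial sketch of a sign-function-based stable/unstable splitting of a state-space model (A, B, C). It assumes a real system matrix A with no eigenvalues on the imaginary axis and uses an unscaled Newton iteration; the function names, tolerances, and dense NumPy kernels are illustrative assumptions, not the chapter's parallel implementation.

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=100):
    """Newton iteration X <- (X + X^{-1}) / 2 for the matrix sign function.

    Converges quadratically when A has no eigenvalues on the imaginary axis.
    (Scaling strategies used in practice are omitted in this sketch.)
    """
    X = np.asarray(A, dtype=float).copy()
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 1) <= tol * np.linalg.norm(X_new, 1):
            return X_new
        X = X_new
    return X

def stable_unstable_split(A, B, C):
    """Additive decomposition of G(s) = C (sI - A)^{-1} B into stable + unstable parts.

    The spectral projector P = (I - sign(A)) / 2 maps onto the stable invariant
    subspace (Re(lambda) < 0); range(I - P) is the unstable invariant subspace.
    Since both subspaces are A-invariant, the similarity T built from their bases
    block-diagonalizes A, which yields the additive splitting of the transfer function.
    """
    n = A.shape[0]
    S = matrix_sign(A)
    P = 0.5 * (np.eye(n) - S)           # projector onto the stable subspace
    k = int(round(np.trace(P)))          # dimension of the stable part

    # Bases of the two complementary A-invariant subspaces (via SVDs of the projectors).
    Us, _, _ = np.linalg.svd(P)
    Uu, _, _ = np.linalg.svd(np.eye(n) - P)
    T = np.hstack([Us[:, :k], Uu[:, :n - k]])
    Ti = np.linalg.inv(T)

    At, Bt, Ct = Ti @ A @ T, Ti @ B, C @ T
    stable   = (At[:k, :k], Bt[:k, :], Ct[:, :k])   # reduced further, e.g. by BT
    unstable = (At[k:, k:], Bt[k:, :], Ct[:, k:])   # kept as is
    return stable, unstable
```

In the procedure the abstract outlines, the stable block returned here would then be reduced by balanced truncation using low-rank factors of its Gramians, and the reduced stable part would be recombined with the unchanged unstable part.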
