Abstract

The numerical solution of the dense, complex-valued linear system of equations generated by the method of moments (MoM) generally proceeds by computing an LU decomposition of the impedance matrix. Depending on the available hardware resources, the LU algorithm can be executed on either sequential or parallel computers. A straightforward parallel implementation of LU factorisation does not yield a well-distributed workload, which makes it the most computationally expensive step of the MoM process, especially when it is adapted to GPU technology. The performance of the LU decomposition can be improved by applying a hybrid approach to the parallel processing model. In the work reported here, the problem of accelerating an out-of-core-like LU solver on a low-cost heterogeneous single-GPU/CPU computing platform is addressed. To this end, a variable panel-width tuning scheme combined with a hybrid panel-based LU decomposition method is employed, which is something of a novelty in the development of dense linear algebra software. Numerical results are provided to demonstrate the efficiency of the proposed approach.
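
The abstract does not spell out the algorithmic details, but panel-based LU of the kind it refers to is conventionally organised as a right-looking blocked factorisation: a tall, narrow panel of columns is factored (a memory-bound step usually kept on the CPU in hybrid solvers), after which the trailing submatrix receives a large matrix-matrix (GEMM) update, which is the compute-bound part typically offloaded to the GPU and streamed panel by panel when the matrix is held out of core. The sketch below illustrates that structure with a variable panel width on a complex matrix. It is a minimal, illustrative assumption, not the authors' actual scheme: in particular, the tuning rule in panel_width and all function names are hypothetical, and the GPU offload and out-of-core staging are only indicated in comments.

    import numpy as np
    from scipy.linalg import solve_triangular

    def panel_width(k, n, narrow=32, wide=128):
        # Hypothetical tuning rule (illustrative only): keep panels wide while
        # the trailing update is large enough to saturate a GPU, then narrow.
        remaining = n - k
        return min(remaining, wide if remaining > n // 4 else narrow)

    def factor_panel(P):
        """Unblocked LU with partial pivoting on an m-by-nb panel, in place.
        In a hybrid solver this kernel would typically run on the CPU.
        Returns LAPACK-style pivots (row j was swapped with row ipiv[j])."""
        m, nb = P.shape
        ipiv = np.empty(nb, dtype=np.intp)
        for j in range(nb):
            p = j + np.argmax(np.abs(P[j:, j]))      # partial pivoting
            ipiv[j] = p
            if p != j:
                P[[j, p], :] = P[[p, j], :]
            P[j + 1:, j] /= P[j, j]                  # column of L
            P[j + 1:, j + 1:] -= np.outer(P[j + 1:, j], P[j, j + 1:])
        return ipiv

    def blocked_lu(A):
        """Right-looking, panel-based LU with variable panel widths.
        Returns (LU, perm): packed L\\U factors and the row permutation."""
        A = np.array(A, dtype=np.complex128)         # MoM systems are complex
        n = A.shape[0]
        perm = np.arange(n)
        k = 0
        while k < n:
            nb = panel_width(k, n)
            # 1) Factor the current panel (BLAS-2-heavy, CPU-friendly step).
            ipiv = factor_panel(A[k:, k:k + nb])
            # 2) Replay the panel's row swaps on columns outside the panel.
            for j, p in enumerate(ipiv):
                if p != j:
                    A[[k + j, k + p], :k] = A[[k + p, k + j], :k]
                    A[[k + j, k + p], k + nb:] = A[[k + p, k + j], k + nb:]
                    perm[[k + j, k + p]] = perm[[k + p, k + j]]
            if k + nb < n:
                # 3) Triangular solve for the block row of U: L11 @ U12 = A12.
                A[k:k + nb, k + nb:] = solve_triangular(
                    A[k:k + nb, k:k + nb], A[k:k + nb, k + nb:],
                    lower=True, unit_diagonal=True)
                # 4) Rank-nb trailing update: the GEMM a hybrid solver would
                #    offload to the GPU, streaming panels from host memory in
                #    an out-of-core setting.
                A[k + nb:, k + nb:] -= A[k + nb:, k:k + nb] @ A[k:k + nb, k + nb:]
            k += nb
        return A, perm

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 500
        A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        LU, perm = blocked_lu(A)
        L = np.tril(LU, -1) + np.eye(n)
        U = np.triu(LU)
        assert np.allclose(L @ U, A[perm], atol=1e-8)   # P @ A == L @ U

The performance sensitivity the paper exploits is visible in step 4: wider panels shift work from the sequential panel factorisation into the GEMM-style update, but also raise the per-panel memory traffic, which is why a width that varies with the remaining problem size can outperform any single fixed choice.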
