Abstract

In this paper, a modeling technique enabling the fast analysis of antenna arrangements built up from metallic parts is presented in detail. A calculation method that accelerates the solution of integral equations discretized by the Method of Moments has been implemented in a massively parallel computing environment, a general-purpose graphics processing unit (GPU). Rao-Wilton-Glisson edge basis functions are used to expand the surface currents, while the radiated field is computed using the dipole model. Since the entire computation takes place exclusively on the GPU, the overall computational time is significantly reduced. To obtain such an acceleration, a novel computational technique is proposed for filling the impedance matrix, which takes full advantage of the given platform.

Analysis of antenna structures can be carried out very efficiently using the frequency-domain (FD) integral equation formulation, discretized by the Method of Moments (MoM) (1). The FD-MoM implementation employing surface patch modeling (2) of the (infinitesimally thin) metal surfaces does not require the surrounding air to be included in the model. This is a major advantage over the Finite Element Method (FEM), where the surrounding air is part of the model and must therefore be discretized as well. As the computational demand is proportional to the number of discrete geometrical elements constituting the model, the Degrees of Freedom (DoF) of the resulting linear equation system to be solved is significantly higher for FEM than for MoM. Furthermore, as the correct behavior at the radiation condition is automatically incorporated in the moment method, there is no need for a special termination of the problem domain, e.g., by a Perfectly Matched Layer (PML) or an Absorbing Boundary Condition (ABC), as in FEM. Unfortunately, these advantages of MoM are offset by its well-known high demand for computational resources in terms of memory capacity and CPU time.
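The MoM pipeline described above — discretize an integral equation with basis functions, fill a dense impedance matrix, and solve the resulting linear system — can be illustrated with a deliberately simple example. The sketch below is not the paper's RWG electrodynamic formulation; it is a classic electrostatic MoM exercise (pulse basis functions with point matching, computing the capacitance of a thin wire held at 1 V), chosen only to show the same Z q = v structure. All names and parameter values are illustrative assumptions.

```python
# Minimal MoM sketch: pulse basis + point matching for the charge on a
# thin straight wire at 1 V. Same pipeline as the paper (discretize,
# fill dense Z, solve), but electrostatic rather than electrodynamic.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def wire_capacitance(L=1.0, a=1e-3, n=200):
    """Capacitance of a thin wire (length L, radius a) held at 1 V,
    using n pulse basis functions and point matching."""
    dz = L / n
    z = (np.arange(n) + 0.5) * dz        # segment centres = match points
    Z = np.empty((n, n))
    for m in range(n):
        for k in range(n):
            if m == k:
                # self term: analytic integral of 1/R over the segment
                Z[m, k] = 2.0 * np.log((dz / 2 + np.hypot(dz / 2, a)) / a)
            else:
                # distant segment approximated as a point charge
                Z[m, k] = dz / abs(z[m] - z[k])
    Z /= 4.0 * np.pi * EPS0
    v = np.ones(n)                        # 1 V enforced at every match point
    q = np.linalg.solve(Z, v)             # line charge density [C/m]
    return float(np.sum(q) * dz)          # total charge = C * (1 V)
```

The solved charge density peaks at the wire ends, as expected physically, and the total charge converges toward the analytic thin-wire capacitance of roughly 2*pi*eps0*L / ln(L/a) as n grows.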
Although spectacular reduction techniques such as the Multi-Level Fast Multipole Method (MLFMM) exist, they cannot be applied to all practical problems, so the computationally expensive standard MoM formulation must often be used. In this paper, the acceleration of the standard MoM is carried out using a massively parallel computing environment, the GPU. Although several similar works have already been published (3,4),
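The paper's stated contribution is a novel GPU-resident technique for filling the impedance matrix, but the kernel itself is not described in this excerpt. The hypothetical sketch below only illustrates the general idea that makes the fill massively parallel: every matrix entry Z[m, n] depends only on one (observation, source) pair, so a GPU can assign one thread per entry. Here that independence is emulated with NumPy broadcasting over a toy scalar Green's-function interaction; the kernel, point sets, and wavenumber are assumptions for illustration.

```python
# Toy illustration of an embarrassingly parallel impedance-matrix fill.
# Each entry Z[m, n] = exp(-j*k*R)/R is independent of all others, which
# is what lets a GPU compute one entry per thread.
import numpy as np

def fill_loop(src, obs, k=2 * np.pi):
    """Reference sequential fill with an explicit double loop."""
    M, N = len(obs), len(src)
    Z = np.empty((M, N), dtype=complex)
    for m in range(M):
        for n in range(N):
            R = np.linalg.norm(obs[m] - src[n])   # observer-source distance
            Z[m, n] = np.exp(-1j * k * R) / R     # scalar Green's function
    return Z

def fill_parallel(src, obs, k=2 * np.pi):
    """Same fill with all entries computed at once -- the broadcasting
    here stands in for a grid of GPU threads, one per (m, n) pair."""
    R = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)  # (M, N)
    return np.exp(-1j * k * R) / R
```

Note that the point sets must be disjoint so that R never vanishes; a real MoM fill handles the singular self terms separately, exactly as the electrostatic example's diagonal did.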
