Designing metasurfaces is a challenging task. Traditional methodologies, which rely primarily on iterative procedures, are time-intensive and require specialized expertise. The proposed algorithm uses a conditional deep convolutional generative adversarial network (cDCGAN) to design metasurfaces. The method instantly creates a 2D image of a multi-layer metasurface from the scattering parameter S11, which serves as the input vector. The algorithm significantly reduces the required size of the training dataset by applying pre-training and post-generating steps. The pre-training step involves aliasing the images and modifying them to a limited color palette. The post-generating step consists of separating the color channels, converting the pixel data to vector-based images, and fine-tuning the borders. The algorithm is evaluated on three metasurfaces whose features are distinct from the training dataset samples: a single-band metasurface unit cell, a dual-band metasurface unit cell, and a partially trained sample refined through magnetic field analysis. The results show that the proposed algorithm accurately predicts the images of these metasurface unit cells, demonstrating its potential for fast and efficient metasurface design.
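To illustrate the kind of conditioning the abstract describes, the sketch below shows a minimal conditional DCGAN generator that maps a target S11 spectrum (plus a noise vector) to a 2D unit-cell image. It is not the paper's architecture: the spectrum length (128 points), noise dimension, image size (64x64), and layer widths are all assumptions for illustration only.

```python
# Minimal sketch of a cDCGAN generator in PyTorch, assuming the S11 spectrum is
# sampled at 128 frequency points and the output is a 64x64 RGB unit-cell image.
# All dimensions are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=100, s11_dim=128, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # Project the concatenated (noise, S11) vector to a 4x4 feature map.
            nn.ConvTranspose2d(noise_dim + s11_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(feat),
            nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),             # 64x64
            nn.Tanh(),
        )

    def forward(self, noise, s11):
        # Condition the generator by concatenating the S11 vector with the noise.
        x = torch.cat([noise, s11], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)

# Usage: generate one image from a random noise vector and a target S11 response.
gen = Generator()
noise = torch.randn(1, 100)
s11 = torch.rand(1, 128)   # placeholder target reflection spectrum
image = gen(noise, s11)    # shape: (1, 3, 64, 64)
```

In a full cDCGAN, a discriminator would receive the same S11 vector alongside real or generated images so that both networks learn the spectrum-to-geometry conditioning; that half is omitted here for brevity.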