3D reconstruction is generally defined as the process of capturing the shape and appearance of real objects. By reconstructing a 3D digital model from a series of 2D slices, it greatly facilitates visualizing the internal structure of a material and deciphering its structure-property relationship. 3D reconstruction has therefore become a cutting-edge technique for depicting the internal structure and evaluating the physical performance of targeted materials. In recent years, generative machine learning methods, such as generative adversarial networks (GANs), have achieved tremendous success in AI-generated physical content (AIGPC). However, many technical challenges remain, including oversimplified models, oversized dataset requirements, and inefficient convergence. These difficulties stem from an insufficient ability to capture detailed features, which limits the quality of the generated models. To this end, a novel generative model is developed that combines the multiscale features of U-net with the synthesis ability of GANs. With the help of a multiscale channel aggregation module, a hierarchical feature aggregation module, and a convolutional block attention module, the model better captures the features of the material microstructure. The loss function is refined by combining an image regularization loss with the Wasserstein distance loss. In addition, the anisotropy index is adopted to quantitatively measure the anisotropic degree of selected samples. The results demonstrate that the 3D structures generated by the proposed model retain high fidelity to the ground-truth samples. With this remarkable performance, the proposed model not only outperforms the traditional GAN but also sheds light on AIGPC and physical 3D reconstruction.
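The refined loss described above combines an image regularization term with the Wasserstein distance loss. A minimal sketch of one plausible combination is given below; the function name `combined_generator_loss`, the L2 form of the regularizer, and the weighting factor `lam` are all assumptions for illustration, since the abstract does not specify the exact formulation:

```python
import numpy as np

def combined_generator_loss(critic_fake_scores, fake_batch, real_batch, lam=10.0):
    """Illustrative combination of a Wasserstein generator loss with an
    image regularization term (hypothetical form, not the paper's exact one)."""
    # Wasserstein generator loss: the generator tries to raise the critic's
    # scores on generated samples, i.e. minimize their negative mean.
    w_loss = -np.mean(critic_fake_scores)
    # Image regularization: a simple voxel-wise L2 penalty between generated
    # and reference volumes, used here as an illustrative stand-in.
    reg_loss = np.mean((fake_batch - real_batch) ** 2)
    return w_loss + lam * reg_loss
```

In practice the two terms pull in different directions: the Wasserstein term drives the generated distribution toward the real one, while the regularizer penalizes voxel-level deviation, and `lam` balances the two.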