• Image-level and feature-level global structure information guides image outpainting
• A multi-level dilated convolution block markedly expands the receptive field of the network
• A patch-GAN discriminates patches of the outpainted images
• Both perceptual loss and style loss improve the texture and style of the outpainted images

Deep learning-based image outpainting infers missing regions from the known parts of an image. However, because they do not fully exploit the structure and texture information of the known regions, most existing methods produce blurry content and distorted structures. To generate more natural outpainting results, we propose a two-stage outpainting method guided by prior structure information. It consists of structure outpainting followed by texture outpainting: the model first completes the image structure and then refines the generated image. In Stage-I, we build a structure outpainting network that infers the structure of the missing regions from that of the known regions. This fully exploits the global structure information and produces complete structure images. Stage-II builds upon the Stage-I results and utilizes both the inferred image-level and the aggregated multi-scale feature-level structure information to refine the results with more authentic and natural texture. Moreover, a multi-level dilated convolution block is presented to significantly enlarge the receptive field of the texture outpainting network, enabling it to extract more useful features for producing finer texture. Experimental results on the Places2 and Paris StreetView datasets show that our method outperforms existing state-of-the-art (SOTA) methods in both qualitative and quantitative comparisons.
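To make the multi-level dilated convolution block concrete, the following is a minimal PyTorch sketch. The parallel dilation rates (1, 2, 4, 8), the concatenate-and-fuse scheme, and the residual connection are our assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiLevelDilatedBlock(nn.Module):
    """Sketch of a multi-level dilated convolution block.

    Parallel 3x3 convolutions with dilation rates 1, 2, 4, 8
    (hypothetical rates) cover progressively larger receptive
    fields; their outputs are concatenated, fused by a 1x1
    convolution, and added back to the input.
    """

    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding == dilation keeps the spatial size unchanged
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(out)  # residual aggregation


# Usage: the block leaves the feature-map shape unchanged.
feats = torch.randn(1, 64, 32, 32)
block = MultiLevelDilatedBlock(64)
print(block(feats).shape)  # torch.Size([1, 64, 32, 32])
```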
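The patch-GAN in the highlights can be sketched similarly: rather than emitting a single real/fake score, the discriminator outputs a score map whose entries each judge one local patch of the outpainted image. The pix2pix-style layer widths and depth below are assumed for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Sketch of a patch-GAN discriminator (pix2pix-style layout,
    assumed here). The output is an (N, 1, H', W') score map;
    each score rates the realism of one receptive-field patch."""

    def __init__(self, in_channels: int = 3, base: int = 64):
        super().__init__()

        def down(c_in, c_out, stride=2, norm=True):
            layers = [nn.Conv2d(c_in, c_out, 4, stride, 1)]
            if norm:
                layers.append(nn.InstanceNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.net = nn.Sequential(
            *down(in_channels, base, norm=False),
            *down(base, base * 2),
            *down(base * 2, base * 4),
            *down(base * 4, base * 8, stride=1),
            nn.Conv2d(base * 8, 1, 4, 1, 1),  # per-patch scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Usage: a 256x256 image yields a 30x30 grid of patch scores.
img = torch.randn(1, 3, 256, 256)
print(PatchDiscriminator()(img).shape)  # torch.Size([1, 1, 30, 30])
```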
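Finally, perceptual and style losses are conventionally computed on pretrained VGG features, with style measured via Gram matrices; the sketch below assumes that standard formulation (the VGG-16 layer cut and L1 distances are illustrative choices, not necessarily the paper's).

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Feature extractor: early VGG-16 layers. Pretrained weights would
# normally be used; omitted here so the sketch runs offline.
vgg = vgg16(weights=None).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(f: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a feature map: channel-wise correlations."""
    n, c, h, w = f.shape
    f = f.view(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_and_style_loss(pred, target):
    fp, ft = vgg(pred), vgg(target)
    perceptual = F.l1_loss(fp, ft)           # feature-space distance
    style = F.l1_loss(gram(fp), gram(ft))    # texture-statistics distance
    return perceptual, style

# Usage with dummy images in place of outpainted / ground-truth pairs.
pred = torch.randn(1, 3, 256, 256)
target = torch.randn(1, 3, 256, 256)
print(perceptual_and_style_loss(pred, target))
```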