Image inpainting aims to recover the damaged regions of a corrupted image while maintaining the integrity of the structure and texture within the filled regions. Previous popular approaches have restored images with vivid textures and structures by introducing structure priors. However, structure prior-based approaches face two main challenges: (1) fine-grained textures suffer from adverse inpainting effects because the interaction between structures and textures is not fully considered, and (2) the features of multi-scale objects in structural and textural information cannot be extracted correctly because of the limited receptive fields of convolution operations. In this paper, we propose a texture and structure bidirectional generation network (TSBGNet) to address these issues. We first reconstruct the texture and structure of corrupted images; we then design a texture-enhanced FCMSPCNN (TE-FCMSPCNN) to optimize the generated textures. We also combine a bidirectional information flow (BIF) module with a detail enhancement (DE) module to integrate texture and structure features globally. Additionally, we derive a multi-scale attentional feature fusion (MAFF) module to fuse multi-scale features. Experimental results demonstrate that TSBGNet effectively reconstructs realistic content and significantly outperforms other state-of-the-art approaches on three popular datasets. Moreover, the proposed approach yields promising results on the Dunhuang Mogao Grottoes mural dataset.