The deep rectangling task aims to transform stitched images with irregular boundaries into standardised rectangular formats using deep learning. Existing deep rectangling solutions rely on mesh-based segmentation and deformation, which introduce global distortions that are detrimental to subsequent analysis and processing. We therefore propose RectanglingGAN, which fills the missing regions in a distortion-free manner via image inpainting. Unlike generic inpainting models, RectanglingGAN focuses on preserving the quality of edge regions. In particular, we present an adaptive distance transform module that strengthens the attention of feature maps to the mask boundaries; the module dynamically selects an appropriate distance transformation scheme according to the image content. Moreover, we introduce a distance-weighted decaying reconstruction loss into the pixel-wise reconstruction, which encodes the spatial relationship between inpainted pixels and the mask boundaries by inverting the distance-transformed mask. With this loss, the model pays more attention to the quality of generated pixels far from the mask boundary. Extensive experiments validate the effectiveness of RectanglingGAN against state-of-the-art methods, highlighting its significant advantages in both the quality and the fidelity of edge regions in the generated images.
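As a rough illustration of how a distance-weighted reconstruction loss can emphasise pixels far from the mask boundary, the minimal sketch below weights a pixel-wise L1 term by a normalised distance transform of the hole mask. The function name `distance_weighted_l1`, the use of `scipy.ndimage.distance_transform_edt`, and the normalisation scheme are illustrative assumptions, not the paper's actual formulation, which derives its weights by inverting the distance-transformed mask with a decaying profile.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


def distance_weighted_l1(pred, target, mask, eps=1e-6):
    """Hypothetical sketch of a distance-weighted reconstruction loss.

    pred, target: (B, C, H, W) tensors; mask: (B, 1, H, W), 1 = missing pixel.
    Missing pixels farther from the mask boundary receive larger weights,
    matching the behaviour described in the abstract.
    """
    weights = []
    for m in mask.detach().cpu().numpy():
        # Euclidean distance of each missing pixel to the nearest valid pixel.
        d = distance_transform_edt(m[0])
        d = d / (d.max() + eps)  # normalise weights to [0, 1]
        weights.append(d)
    w = torch.from_numpy(np.stack(weights)).unsqueeze(1).to(pred.device).float()

    # Weighted L1 restricted to the missing region.
    per_pixel = F.l1_loss(pred, target, reduction="none")
    return (w * mask * per_pixel).sum() / (mask.sum() * pred.shape[1] + eps)
```

The exact mapping from the (inverted) distance transform to the per-pixel weight, including any decay schedule, is a design choice of the paper and is not reproduced here; the sketch only shows the general mechanism of injecting mask-boundary distance into the pixel-wise reconstruction term.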