Two key challenges exist in high dynamic range (HDR) imaging from multi-exposure low dynamic range (LDR) images of dynamic scenes: 1) aligning the input images under large-scale foreground motion and 2) recovering large saturated regions from a limited number of input LDR images. Several deep convolutional neural networks have been proposed to tackle these challenges and have made significant progress; however, they tend to suffer from ghosting and saturation artifacts in challenging scenes. In this article, we propose an end-to-end deformable HDR imaging network, called DHDRNet, which alleviates these problems by building an effective alignment module and adopting self-guided attention. First, we analyze the alignment process in the HDR imaging task and accordingly design a pyramidal deformable module (PDM) that aligns LDR images at multiple scales and reconstructs aligned features in a coarse-to-fine manner. In this way, the proposed DHDRNet can handle large-scale complex motion and suppress ghosting artifacts caused by misalignment. Moreover, we adopt self-guided attention to reduce the influence of saturated regions during the alignment and merging processes, which helps suppress artifacts and retain fine details in the final HDR image. Extensive qualitative and quantitative comparisons demonstrate that the proposed model outperforms existing state-of-the-art methods and is robust to challenging scenes with large-scale motion and severe saturation. The source code is available at https://github.com/Tx000/DHDRNet.
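To make the two components described above concrete, the following is a minimal PyTorch sketch of pyramid-style deformable alignment (coarse-to-fine offset refinement with torchvision's DeformConv2d) and of attention-guided feature weighting. All module and variable names (PyramidAlign, GuidedAttention, offset_convs, etc.) are illustrative assumptions, not taken from the released DHDRNet code; consult the repository linked above for the authors' actual implementation.

```python
# Hedged sketch: pyramid deformable alignment + guided attention in PyTorch.
# This illustrates the general technique only; it is not the DHDRNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class PyramidAlign(nn.Module):
    """Align a non-reference feature map to the reference, coarse to fine."""

    def __init__(self, channels=64, levels=3, kernel=3):
        super().__init__()
        self.levels = levels
        self.offset_convs = nn.ModuleList()
        self.deform_convs = nn.ModuleList()
        for _ in range(levels):
            # Offsets (2 values per kernel tap) are predicted from the
            # concatenated reference and non-reference features per level.
            self.offset_convs.append(
                nn.Conv2d(channels * 2, 2 * kernel * kernel, 3, padding=1))
            self.deform_convs.append(
                DeformConv2d(channels, channels, kernel, padding=kernel // 2))

    def forward(self, feat_nonref, feat_ref):
        # Build feature pyramids by repeated 2x downsampling.
        pyr_nonref, pyr_ref = [feat_nonref], [feat_ref]
        for _ in range(self.levels - 1):
            pyr_nonref.append(F.avg_pool2d(pyr_nonref[-1], 2))
            pyr_ref.append(F.avg_pool2d(pyr_ref[-1], 2))

        aligned, offset_up = None, None
        for lvl in reversed(range(self.levels)):  # coarsest level first
            x = torch.cat([pyr_nonref[lvl], pyr_ref[lvl]], dim=1)
            offset = self.offset_convs[lvl](x)
            if offset_up is not None:
                # Refine upsampled coarse offsets at the finer level; the
                # factor 2 rescales them to the doubled spatial resolution.
                offset = offset + 2.0 * offset_up
            aligned = self.deform_convs[lvl](pyr_nonref[lvl], offset)
            if lvl > 0:
                offset_up = F.interpolate(offset, scale_factor=2,
                                          mode='bilinear', align_corners=False)
        return aligned  # aligned features at full resolution


class GuidedAttention(nn.Module):
    """Down-weight saturated/misaligned regions before merging (a sketch)."""

    def __init__(self, channels=64):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid())

    def forward(self, feat_nonref, feat_ref):
        # Per-pixel, per-channel weights in [0, 1], guided by the reference.
        weight = self.att(torch.cat([feat_nonref, feat_ref], dim=1))
        return feat_nonref * weight


if __name__ == "__main__":
    align = PyramidAlign(channels=64, levels=3)
    att = GuidedAttention(channels=64)
    ref = torch.randn(1, 64, 64, 64)      # reference-exposure features
    nonref = torch.randn(1, 64, 64, 64)   # non-reference-exposure features
    aligned = align(nonref, ref)          # -> (1, 64, 64, 64)
    weighted = att(aligned, ref)          # attention-weighted merge input
```

The coarse-to-fine loop mirrors the pyramidal idea in the abstract: offsets estimated at low resolution capture large motion cheaply, and finer levels only refine them, which is why this style of alignment copes with large-scale displacement better than single-scale deformable convolution.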