The inverse tone mapping (iTM) technique, which produces a high dynamic range (HDR) image from a single standard dynamic range (SDR) image, has recently received much attention in both industry and academia. However, existing HDR recovery methods mainly focus on overexposed regions while ignoring underexposed ones. Underexposed regions are susceptible to noise and artifacts, which degrade image quality and the viewer's visual experience. Therefore, in this paper, we propose a brightness-adaptive iTM model based on deep learning that restores the content of both the overexposed and underexposed regions of an SDR image simultaneously. Instead of directly predicting the HDR output, our model adopts an encoder-decoder network to predict spatially adaptive kernels, which are then convolved with the input SDR image to produce the HDR result. With these spatially adaptive kernels, input regions with different exposures can be mapped adaptively by fully exploiting neighborhood information. Importantly, brightness-adaptive skip connections in the encoder-decoder network, together with a region loss, are designed to force the proposed model to attend to overexposed and underexposed regions. In addition, a global branch is employed in the encoder to exploit both global and local brightness. Extensive qualitative and quantitative experiments demonstrate that the proposed approach outperforms recent methods on multiple metrics.