Images captured under low-light conditions suffer from multiple compounded degradations, including low brightness, low contrast, noise, and color bias. Many learning-based techniques attempt to learn the mapping between low-light and normal-light images. However, they often fall short on low-light images of wide-contrast scenes, because uneven illumination introduces illumination-varying noise and the enhanced images are easily over-saturated in highlight areas. In this paper, we present a novel two-stage method to tackle uneven illumination distribution in low-light images. Under the assumption that noise varies with illumination, we design an illumination-aware transformer network for the first stage, image restoration. In this stage, we introduce the Illumination-Aware Attention Block, featuring Illumination-Aware Multi-Head Self-Attention, which incorporates illumination features at multiple scales to guide the attention module, thereby enhancing the denoising and reconstruction capabilities of the restoration network. In the second stage, we introduce a cubic auto-knee curve transfer with a global parameter predictor to alleviate the over-exposure caused by uneven illumination, and we adopt a white balance correction module to address color bias. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods, both qualitatively and quantitatively.
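To make the second-stage idea concrete, the following is a minimal sketch of a global cubic tone curve applied to a normalized image. The coefficients here are illustrative assumptions chosen by hand; in the described method they would be produced by the global parameter predictor, and the "auto-knee" behavior corresponds to a curve whose slope flattens near the highlights.

```python
import numpy as np

def cubic_curve(x, a, b, c):
    # Global cubic tone curve y = a*x^3 + b*x^2 + c*x on [0, 1],
    # clipped back into [0, 1]. In the paper's pipeline, (a, b, c)
    # would come from a learned global parameter predictor; here
    # they are fixed by hand purely for illustration.
    return np.clip(a * x**3 + b * x**2 + c * x, 0.0, 1.0)

# Illustrative coefficients (assumed, not the paper's learned values):
# the curve brightens midtones while its slope near x = 1 is small,
# producing a highlight "knee" that suppresses over-exposure.
img = np.linspace(0.0, 1.0, 5)  # toy 1-D stand-in for an image
out = cubic_curve(img, a=-0.5, b=0.2, c=1.3)
```

With these coefficients the curve maps 0 to 0 and 1 to 1, lifts midtones (0.5 maps to about 0.64), and has a gentle slope near the top end, which is the qualitative shape one would want for taming bright regions.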