With the widespread adoption of mobile multimedia devices, deploying compute-intensive inference tasks on resource-constrained edge devices remains a formidable challenge, particularly for low-light text detection. Existing deep learning approaches have shown limited effectiveness in restoring images captured in extremely dark scenes. To address these limitations, this paper presents a novel cloud-based Low-light Attention Enhancement Generative Adversarial Network (LAE-GAN) for the unpaired text-image enhancement task in extremely low-light conditions. In the first stage, compressed low-light images are transmitted from edge devices to a cloud server for enhancement. LAE-GAN is an end-to-end network that combines a Zero-DCE and AGM-Net generator with a global and local discriminator structure. The Zero-DCE network performs the initial illumination restoration of extremely low-light images. To enhance text details, we propose an Enhanced Text Attention Mechanism (ETAM) that propagates text information as attention throughout the entire network: the Sobel operator extracts text edge information, and constraints imposed on the attention map and edge map focus the network on text-region details. An AGM-Net module is further integrated to suppress noise and fine-tune illumination. In the second stage, the cloud server makes decisions based on user requirements and processes requests in parallel, scaling with the number of requests. In the third stage, the enhanced results are transmitted back to the edge devices for text detection. Experimental results on the widely used LOL and SID low-light datasets show that LAE-GAN surpasses state-of-the-art enhancement methods in both quantitative and qualitative evaluation of image restoration and text detection.
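As a rough illustration of the edge-based attention constraint described above, the following PyTorch sketch shows one way to compute a Sobel edge map and use it as a soft target for a learned text attention map. The function names, the per-image normalization, and the L1 form of the constraint are assumptions made for illustration; the abstract does not specify ETAM at this level of detail.

```python
import torch
import torch.nn.functional as F

def sobel_edge_map(gray: torch.Tensor) -> torch.Tensor:
    """Normalized edge-magnitude map from a grayscale batch of shape (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=gray.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)              # vertical-gradient kernel is the transpose
    gx = F.conv2d(gray, kx, padding=1)   # horizontal gradients
    gy = F.conv2d(gray, ky, padding=1)   # vertical gradients
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)  # gradient magnitude
    # Normalize each image to [0, 1] so the map can serve as a soft attention target.
    return mag / mag.amax(dim=(2, 3), keepdim=True).clamp(min=1e-6)

def edge_attention_loss(attn: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
    # Hypothetical L1 constraint pulling the predicted attention map toward
    # text-edge regions; the paper's exact formulation may differ.
    return F.l1_loss(attn, edges)

# Example usage with hypothetical shapes:
img = torch.rand(2, 1, 64, 64)    # grayscale low-light crops
attn = torch.rand(2, 1, 64, 64)   # attention map predicted by the generator
loss = edge_attention_loss(attn, sobel_edge_map(img))
```

Penalizing disagreement between the attention map and the edge map in this way would encourage the generator to concentrate restoration capacity on text strokes, which is consistent with the role the abstract ascribes to ETAM.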