Due to the complexity of underwater environments, acquiring high-quality paired underwater images is a significant challenge. The absorption and scattering of light by water often yield images with low contrast, color deviations, and blurred details. To address these problems, this paper proposes an improved unsupervised learning model based on CycleGAN. The model uses a two-part generator to separate content and style features of underwater images, integrates them through a multi-scale fusion module, and then reconstructs clear images with a decoder, enhancing image quality via style transfer. Experiments show that our algorithm outperforms other state-of-the-art models in both PSNR and SSIM while producing visually high-quality enhanced images. Feature point matching experiments further demonstrate the practicality of our model.
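As a point of reference for the evaluation metrics mentioned above, PSNR compares an enhanced image against a reference via mean squared error. The sketch below is a minimal, illustrative implementation in pure Python (the function name and list-of-rows image representation are our own choices, not from the paper); real evaluations would typically use a library such as scikit-image.

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two equal-sized images.

    ref, test: images as nested lists of pixel intensities (rows of values).
    max_val:   maximum possible pixel value (255 for 8-bit images).
    """
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    # Mean squared error over all pixels.
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: every pixel off by 5 gives MSE = 25.
print(round(psnr([[255, 0], [0, 255]], [[250, 5], [5, 250]]), 2))  # ≈ 34.15 dB
```

Higher PSNR indicates less pixel-wise distortion; SSIM complements it by measuring perceived structural similarity rather than raw error.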