In the last decade, exploration of deep-sea ecosystems has surged, offering exciting prospects for discovering untapped resources such as pharmaceuticals, food, and renewable energy sources. Consequently, research in underwater image processing has witnessed substantial growth. However, underwater imaging poses significant challenges, particularly without sophisticated, specialized cameras. Conventional cameras are affected by absorption and scattering in the aquatic environment, producing hazy images with a blue–green tint. This degradation has implications for marine research and other disciplines that rely on underwater imaging. While hardware has advanced over the years, image processing remains a valuable, cost-effective, and practical approach to underwater image enhancement. Despite the existence of state-of-the-art techniques for underwater image enhancement and restoration, their performance is often inconsistent: some methods excel at contrast restoration, but color restoration remains a pervasive challenge. In this paper, we introduce Sea-Pix-GAN, a Generative Adversarial Network (GAN)-based model that addresses these issues in underwater image enhancement. We reformulate the problem as an image-to-image translation task and tailor the objective and loss functions to achieve color, content, and style transfer. The model is trained on a large dataset of underwater scenes that encompasses the diverse color dynamics of underwater subjects. Sea-Pix-GAN demonstrates promising results in restoring color, contrast, texture, and saturation. To validate its effectiveness, we compare Sea-Pix-GAN against several existing techniques, quantitatively using metrics such as PSNR, SSIM, and UIQM, and qualitatively through visual inspection.