Abstract

Vision-guided coal mine robots often operate in low-light environments, capturing images with poor visibility and significant loss of detail, which hinders the advancement of smart mine safety production. Despite various advances, obtaining highly visible, detailed images from robotic vision sensors remains a formidable task. This paper introduces a conditional generative model with a skip-connection structure to address these issues. We present a novel approach based on conditional Generative Adversarial Networks (GANs) aimed at improving the visibility of image content: an encoder-decoder network with skip-connections serves as the generator, and both the generator and the discriminator are tailored to enhance image detail. In addition, we incorporate boundary equilibrium constraints into the loss functions to counter model collapse in conditional GANs, thereby improving training stability and image fidelity. Our experimental findings demonstrate that the model delivers competitive results against other state-of-the-art low-light image enhancement models, alongside a standardized training regimen that ensures rapid and consistent convergence.
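The boundary equilibrium constraint mentioned above can be illustrated with a BEGAN-style balance update (Berthelot et al.); the abstract does not give the paper's exact formulation, so the hyperparameters `gamma` and `lambda_k` below are illustrative assumptions, not the authors' values:

```python
def began_k_update(k_t, loss_real, loss_fake, gamma=0.5, lambda_k=0.001):
    """One boundary-equilibrium step, as in BEGAN.

    k_t weights the fake-sample term in the discriminator objective
    L_D = L(x) - k_t * L(G(z)); it is nudged toward the equilibrium
    gamma * L(x) = L(G(z)) and clipped to [0, 1]:
        k_{t+1} = clip(k_t + lambda_k * (gamma * L_real - L_fake), 0, 1)
    """
    k_next = k_t + lambda_k * (gamma * loss_real - loss_fake)
    return min(max(k_next, 0.0), 1.0)
```

Maintaining this balance term each training step is what keeps the discriminator from overpowering the generator, which is one common source of the mode collapse the paper targets.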

