Remote photoplethysmography (rPPG) has gained significant attention as a non-invasive approach for measuring human vital signs from videos. However, because it relies on capturing the reflection of ambient light from the skin, it is susceptible to low illumination levels, which degrade signal quality. A preliminary study introduced a novel methodology for enhancing rPPG signal extraction in low-light conditions by integrating an Image Enhancement Model (IEM) inspired by Retinex theory, which significantly improved signal quality by preprocessing video frames to better capture subtle changes in facial blood flow. Recognizing the challenge of generalizing across different deep-learning models and unseen examples from diverse datasets, this study further evaluates the efficacy of the IEM + rPPG extraction pipeline on multiple datasets (UBFC-rPPG, BH-rPPG, MMPD-rPPG, and VIPL-HR) by combining the IEM with existing deep-learning-based rPPG extraction models, including DeepPhys, PhysNet, and PhysFormer, as well as with the traditional POS extraction method. Our experiments demonstrate consistent improvements in heart rate estimation accuracy for all methods across all datasets, underscoring the IEM's adaptability and effectiveness. The paper also explores the application of our method to traditional rPPG extraction techniques, further validating its potential for broader use. Through comprehensive analysis, this research not only confirms the impact of lighting conditions on rPPG signal quality but also provides a robust solution for more reliable non-invasive vital-sign monitoring in diverse environments.
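To make the enhance-then-extract pipeline described above concrete, the following is a minimal illustrative sketch, not the authors' actual IEM (which is a learned model): it pairs a classical single-scale Retinex filter with the traditional POS method (Wang et al., 2017), the one non-learned extractor named in the abstract. The function names `retinex_enhance` and `pos_rppg`, the Gaussian sigma, and the window length are assumptions for illustration only.

```python
import cv2
import numpy as np

def retinex_enhance(frame, sigma=80):
    """Single-scale Retinex enhancement (illustrative stand-in for the IEM).
    Retinex theory models an image as reflectance x illumination; subtracting
    the log of a Gaussian-blurred illumination estimate from the log image
    recovers a log-reflectance map that is less sensitive to dim lighting."""
    img = frame.astype(np.float32) + 1.0          # avoid log(0)
    illum = cv2.GaussianBlur(img, (0, 0), sigma)  # smooth illumination estimate
    log_r = np.log(img) - np.log(illum)           # log-reflectance
    # Rescale to 8-bit range for a downstream rPPG extractor.
    log_r = (log_r - log_r.min()) / (log_r.max() - log_r.min() + 1e-8)
    return (log_r * 255).astype(np.uint8)

def pos_rppg(rgb_traces, fs=30):
    """Plane-Orthogonal-to-Skin (POS) pulse extraction from spatially
    averaged RGB traces of shape (N, 3), sampled at fs frames per second."""
    n = len(rgb_traces)
    L = int(1.6 * fs)                  # ~1.6 s sliding window, per POS
    h = np.zeros(n)
    P = np.array([[0, 1, -1], [-2, 1, 1]], dtype=np.float32)
    for t in range(n - L + 1):
        c = rgb_traces[t:t + L].T                        # (3, L) window
        cn = c / (c.mean(axis=1, keepdims=True) + 1e-8)  # temporal normalization
        s = P @ cn                                       # project onto POS plane
        p = s[0] + (s[0].std() / (s[1].std() + 1e-8)) * s[1]
        h[t:t + L] += p - p.mean()                       # overlap-add
    return h

# Hypothetical usage: enhance each face-cropped frame, average its pixels
# into an RGB trace, then extract the pulse signal.
#   traces = np.stack([retinex_enhance(f).reshape(-1, 3).mean(0) for f in frames])
#   pulse = pos_rppg(traces, fs=30)
```

In the study itself, the same slot filled here by single-scale Retinex is occupied by the learned Retinex-inspired IEM, and the extractor may equally be DeepPhys, PhysNet, or PhysFormer; the sketch only conveys the structure of preprocessing frames before signal extraction.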