Localization of vehicles in a 3D environment is a fundamental task in autonomous driving. In low-light environments, it is difficult to navigate autonomously using visual odometry, chiefly because scenes with insufficient illumination yield blurred, low-contrast images. Although numerous works have addressed this issue, existing approaches still suffer from inherent drawbacks. In this paper, we propose a lightweight stereo visual odometry system for the navigation of autonomous vehicles in low-light conditions. In contrast to existing recovery methods, we decompose the captured image into an illumination image and a reflectance image and estimate only the illumination component; the enhanced map of the low-light image is then obtained via Retinex theory. In addition, we employ a simplified and fast feature detection scheme that reduces computation time by about 85% while maintaining matching accuracy comparable to that of ORB features. Finally, experiments show that the average memory consumption of the proposed method is much lower than that of conventional algorithms.
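The Retinex decomposition described above can be illustrated with a minimal sketch. It assumes the standard Retinex model, in which the observed image is the pixel-wise product of reflectance and illumination; the illumination is estimated here by simple Gaussian smoothing and brightened by gamma correction, which is one common instantiation and not necessarily the paper's exact estimator. The function name `retinex_enhance` and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(img, sigma=15.0, gamma=0.5, eps=1e-6):
    """Enhance a low-light image (float array in [0, 1]) via a simple
    Retinex decomposition: img = reflectance * illumination.

    Note: the Gaussian-blur illumination estimate and the gamma value
    are illustrative choices, not the paper's specific method.
    """
    # Estimate the illumination image as a heavily smoothed version
    # of the input (illumination varies slowly across the scene).
    illumination = np.clip(gaussian_filter(img, sigma=sigma), eps, 1.0)
    # Recover the reflectance image by dividing out the illumination.
    reflectance = img / illumination
    # Adjust only the illumination (gamma < 1 brightens dark regions),
    # then recombine with the unchanged reflectance.
    enhanced = reflectance * illumination ** gamma
    return np.clip(enhanced, 0.0, 1.0)
```

Because only the smooth illumination component is modified, scene details carried by the reflectance image are preserved while dark regions are lifted.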