Abstract

Visual simultaneous localization and mapping (SLAM) is an emerging technology that enables low-power devices with a single camera to perform robotic navigation. However, most visual SLAM algorithms are tuned for images produced through an image signal processing (ISP) pipeline optimized for aesthetically pleasing photography. In this paper, we investigate the feasibility of varying sensor quantization on RAW images taken directly from the sensor to save energy for visual SLAM. In particular, we compare linear and logarithmic image quantization and show that visual SLAM is robust to the latter. Further, we introduce a new gradient-based image quantization scheme that exceeds logarithmic quantization's energy savings while preserving accuracy for feature-based visual SLAM algorithms. This work opens a new direction in energy-efficient image sensing for SLAM.
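To make the comparison concrete, the sketch below contrasts linear (uniform) and logarithmic quantization of RAW sensor values. It is an illustrative reimplementation, not the paper's code: the function names, the 12-bit RAW assumption, and the target bit depth are all assumptions chosen for the example.

```python
import numpy as np

def linear_quantize(raw, bits=6, max_val=4095):
    """Uniform quantization: map RAW values in [0, max_val]
    onto 2**bits equally spaced levels."""
    levels = 2 ** bits
    q = np.floor(raw / (max_val + 1) * levels)
    return np.clip(q, 0, levels - 1).astype(np.uint8)

def log_quantize(raw, bits=6, max_val=4095):
    """Logarithmic quantization: finer steps at low intensities,
    coarser steps at high intensities, so fewer bits are needed
    to preserve the low-intensity detail that feature detectors use."""
    levels = 2 ** bits
    q = np.floor(np.log1p(raw) / np.log1p(max_val) * levels)
    return np.clip(q, 0, levels - 1).astype(np.uint8)

# Example: quantize a hypothetical 12-bit RAW frame down to 6 bits.
raw = np.random.randint(0, 4096, size=(480, 640))
lin = linear_quantize(raw)
log = log_quantize(raw)
```

Reducing the per-pixel bit depth in this way cuts the sensor readout and ADC energy; the paper's finding is that feature-based SLAM tolerates such nonuniform quantization far better than uniform quantization at the same bit budget.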
