In this paper, we present two methods for obtaining visual odometry (VO) estimates using a scanning laser rangefinder. Although common VO implementations use stereo camera imagery, passive cameras depend on ambient light; in contrast, actively illuminated sensors such as laser rangefinders work across a wide range of lighting conditions, including full darkness. We leverage previous successes with sparse, appearance-based methods by applying them to laser intensity images, and we address the resulting motion distortion by accounting for the individual timestamp of each interest point detected in an image. To handle these unique timestamps, we introduce two estimator formulations. The first extends the conventional discrete-time batch estimation formulation with a novel frame-to-frame linear interpolation scheme; the second treats the estimation problem in continuous time from the outset, starting with a continuous-time process model. The latter is facilitated by Gaussian process Gauss-Newton (GPGN), an algorithm for nonparametric, continuous-time, nonlinear, batch state estimation. Both laser-based VO methods are validated and compared using datasets from two experimental configurations: 11 km of field data gathered with a high-frame-rate scanning lidar, and a 365 m traverse using a sweeping planar laser rangefinder. Statistical analysis shows an average translation error of 5.3% of distance traveled for the linear interpolation method and 4.4% for GPGN in the high-frame-rate scenario.
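To make the first formulation concrete, consider a minimal sketch of frame-to-frame pose interpolation (the notation here is ours and the paper's exact parameterization may differ): for an interest point measured at time $\tau \in [t_k, t_{k+1}]$, with frame poses $\mathbf{T}_k$ and $\mathbf{T}_{k+1}$, a linearly interpolated pose on $SE(3)$ can be written as

\[
\mathbf{T}(\tau) = \exp\!\big( \alpha \, \ln\!\big( \mathbf{T}_{k+1} \mathbf{T}_k^{-1} \big) \big) \, \mathbf{T}_k,
\qquad
\alpha = \frac{\tau - t_k}{t_{k+1} - t_k},
\]

so that each interest point is related to the map through a pose consistent with its own timestamp, rather than a single pose shared by the whole image.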
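Similarly, a minimal sketch of the continuous-time idea behind GPGN, written in standard Gaussian process regression notation (not necessarily the specific prior used in the paper): the trajectory is modeled as a Gaussian process, $\mathbf{x}(t) \sim \mathcal{GP}\big(\check{\mathbf{x}}(t), \mathbf{P}(t, t')\big)$, and once a Gauss-Newton solve has produced the posterior mean $\hat{\mathbf{x}}$ at the estimation times, the state at any query time $\tau$ follows from the usual GP interpolation formula,

\[
\hat{\mathbf{x}}(\tau) = \check{\mathbf{x}}(\tau) + \mathbf{P}(\tau) \, \mathbf{P}^{-1} \big( \hat{\mathbf{x}} - \check{\mathbf{x}} \big),
\]

where $\mathbf{P}(\tau)$ is the prior cross-covariance between the query time and the estimation times and $\mathbf{P}$ is the prior covariance at the estimation times. This allows every measurement to contribute at its own timestamp without introducing additional discrete states.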