Real-time, accurate localization is a key component of any autonomous mobile robot. Visual localization algorithms usually rely on matching features between the current view and a map using point descriptors. Descriptors such as SIFT and SURF are designed to recognize features seen from different viewpoints, but in a robotics context the robot's movement can itself be modeled to provide useful information for the matching problem. Here we detail a feature-matching solution that uses a local 3D model of the features and exploits the robot's motion model. We compare our method against the SIFT descriptor in a simple matching experiment, then combine it with prediction models to achieve autonomous navigation of a mobile robot. Experiments show that localization remains possible despite severe viewpoint changes.