This research describes two sub-pixel resolution approaches for measuring, through image processing, the in-plane displacements and in-plane rotation of a known target. A dynamic known target is displayed on a pixel grid attached to one end of the kinematic chain of an XYΘz stage, which serves as the experimental testbed. At the other end of the kinematic chain, a digital monochrome camera is fixed to the bottom of the stage and provides the three-degree-of-freedom (X, Y, Θz) position information used as the feedback signal to the vision-based control system in charge of the tool's motion. The illuminated pixels on the display are captured in real time by the digital camera, and the stage motion control system works to keep the displayed image at the commanded location with respect to the camera image plane. The result is a direct-sensing, multi-DOF position feedback system. The proposed camera-pixel-grid sensing setup eliminates reliance on the stage's kinematic model and avoids the need for traditional error compensation techniques, along with their associated cost and complexity. Positioning resolutions on the order of 1/100th of the display pixel size are achieved.
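
The two measurement approaches themselves are detailed in the body of the paper. As a minimal illustration of how sub-pixel (X, Y, Θz) feedback can in principle be extracted from a single monochrome camera frame, the sketch below uses intensity-weighted centroids of two bright markers; the marker layout, window coordinates, and function names are assumptions for illustration only, not the methods used in this work.

```python
# Hypothetical sketch: sub-pixel in-plane pose of a two-marker target.
# Translation comes from the midpoint of the two marker centroids;
# rotation comes from the orientation of the line joining them.
import numpy as np

def weighted_centroid(img, region):
    """Sub-pixel centroid of the bright blob inside a (row0, row1, col0, col1) window."""
    r0, r1, c0, c1 = region
    w = img[r0:r1, c0:c1].astype(float)
    w -= w.min()                              # suppress the background offset
    rows, cols = np.mgrid[r0:r1, c0:c1]
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

def target_pose(img, region_a, region_b):
    """Return (x, y, theta): marker midpoint in pixels and in-plane rotation in radians."""
    ra, ca = weighted_centroid(img, region_a)
    rb, cb = weighted_centroid(img, region_b)
    x, y = (ca + cb) / 2.0, (ra + rb) / 2.0
    theta = np.arctan2(rb - ra, cb - ca)
    return x, y, theta
```

In a feedback loop of the kind described above, an estimate like this would be compared against the commanded image location each frame, with the resulting error driving the stage motion controller.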