A vision-based guidance methodology is proposed for precise positioning of the tool center point (TCP) of heavy-duty, long-reach (HDLR) manipulators. HDLR manipulators are non-rigid structures with many nonlinearities; consequently, conventional rigid-body–based modeling and control methods pose challenges for accurate TCP positioning. To compensate for the resulting positioning errors, we compute the pose error between the TCP and an object of interest (OOI) directly in the camera frame, while using motion-based local calibration to establish the extrinsic sensor-to-robot correspondence. The proposed local calibration pipeline is twofold: first, the detected tool is oriented perpendicular to the OOI; second, range adjustment is performed in the local plane by exploiting the visual measurements. Two range-adjustment methods were examined: a line equation–based method and a trajectory matching–based method. Real-time experiments were conducted using an HDLR manipulator with a 5 m reach, with visual fiducial markers serving as detectable objects for the visual sensor. The experimental results demonstrate that the proposed methodology can provide sub-centimeter positioning accuracy, which is very challenging to achieve with HDLR manipulators due to their characteristic uncertainties.
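To make the core idea concrete, the sketch below illustrates one way to compute the TCP-to-OOI pose error directly in the camera frame from two fiducial-marker detections, as described above. It is a minimal illustration only, not the authors' implementation: the function names, marker poses, and numeric values are hypothetical placeholders.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def pose_to_matrix(t, quat):
    """Build a 4x4 homogeneous transform from a translation and an (x, y, z, w) quaternion."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quat).as_matrix()
    T[:3, 3] = t
    return T


def tcp_ooi_pose_error(T_cam_tcp, T_cam_ooi):
    """Pose of the OOI expressed in the TCP frame, computed entirely from camera-frame detections."""
    T_tcp_ooi = np.linalg.inv(T_cam_tcp) @ T_cam_ooi
    position_error = T_tcp_ooi[:3, 3]  # translation from TCP to OOI [m]
    # Total rotation angle between the TCP and OOI frames [rad]
    angle_error = np.linalg.norm(R.from_matrix(T_tcp_ooi[:3, :3]).as_rotvec())
    return position_error, angle_error


# Hypothetical fiducial-marker poses (e.g., from an AprilTag/ArUco detector), expressed in the camera frame:
T_cam_tcp = pose_to_matrix([0.10, 0.02, 1.50], [0.0, 0.0, 0.0, 1.0])          # marker attached to the tool
T_cam_ooi = pose_to_matrix([0.12, 0.05, 1.80], [0.0, 0.0, 0.1736, 0.9848])    # marker attached to the OOI

pos_err, ang_err = tcp_ooi_pose_error(T_cam_tcp, T_cam_ooi)
print("TCP-to-OOI translation error [m]:", pos_err)
print("TCP-to-OOI rotation error [rad]:", ang_err)
```

Because the error is formed between two poses measured by the same camera, it does not depend on an accurate global camera-to-base calibration; in the proposed methodology, the remaining sensor-to-robot correspondence is handled by the motion-based local calibration step.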