Vision-assisted technologies for industrial manual operation, such as augmented reality (AR), are increasingly popular and require high positioning accuracy and robustness to function properly. However, narrow spaces, moving hands, or tools may occlude local visual features of the operating environment and thus degrade the accuracy and robustness with which the operating position is located; the resulting misguidance may even cause operators to misoperate. This paper proposes a marker-less monocular-vision point-positioning method for vision-assisted manual operation in industrial environments. The proposed method locates the target operation point accurately and robustly via constrained minimization, even when the target area offers no corresponding visual features due to occlusion or improper illumination. The method has three phases: intersection generation, intersection optimization, and target-point solving. In the intersection generation phase, a set of intersections of epipolar lines is generated as candidate target points using fundamental matrices; the solving constraint is thereby converted from point-to-line to point-to-points. In the intersection optimization phase, the intersections are refined into two distinct sets through iterative linear fitting and geometric mean absolute error methods; the solving constraint is further converted from point-to-points to point-to-point sets. In the target-point solving phase, the target point is obtained by solving a constrained minimization problem based on the distribution constraint of the two intersection sets; the solving constraint is finally converted from point-to-point sets to point-to-point, and the unique optimal solution is taken as the target point. Experimental results show that the proposed method achieves better accuracy and robustness than the traditional homography-matrix method in practical industrial operation scenes.
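
The intersection-generation idea described above can be sketched as follows. This is an illustrative outline only, not the paper's implementation: given fundamental matrices `F1` and `F2` that map image points in two reference views to epipolar lines in the target view, each pair of epipolar lines yields one candidate target point at their intersection, computed as the cross product of the two homogeneous lines. All function and variable names here are assumptions introduced for illustration.

```python
import numpy as np

def epipolar_intersection(F1, x1, F2, x2):
    """Generate one candidate target point as the intersection of two
    epipolar lines (illustrative sketch, not the paper's exact method).

    F1, F2 : 3x3 fundamental matrices mapping homogeneous points in two
             reference views to epipolar lines in the target view.
    x1, x2 : homogeneous image coordinates (shape (3,)) of the same 3-D
             point observed in the two reference views.
    Returns the intersection as a homogeneous point (u, v, 1).
    """
    l1 = F1 @ x1            # epipolar line of x1 in the target view
    l2 = F2 @ x2            # epipolar line of x2 in the target view
    p = np.cross(l1, l2)    # two homogeneous lines meet at l1 x l2
    return p / p[2]         # normalize so the last coordinate is 1
```

In practice many such intersections would be generated (one per pair of epipolar lines) and then passed to the optimization phase; degenerate cases where the lines are parallel (`p[2]` near zero) would need to be filtered out.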