Visual simultaneous localization and mapping (VSLAM) is widely used today, but in difficult cases with insufficient feature points (such as weak texture and motion blur), further improving its robustness and accuracy remains an open challenge. In this paper, to address the problem of inadequate feature points in conventional visual SLAM, we improve the original feature matching mechanism so that it provides more correspondences with higher positional precision. First, building on the original feature matching results in the map initialization stage, a single-point least-squares matching (LSM), which iteratively refines the positions of matched points by enforcing geometric and photometric consistency, is presented to improve the map initialization result. Second, to improve tracking performance, given the motion model, the single-point LSM is applied again in the tracking thread to recover new correspondences from features that the original matching mechanism fails to match between the current and previous frames, and these additional correspondences are incorporated into, and strengthen, the subsequent pose optimization. Last, for each newly inserted keyframe, we identify the reprojections of its 3D map points on neighboring local keyframes; these reprojections are added to the local bundle adjustment (BA) to further improve the estimation of the camera pose and the local map. The popular VSLAM package ORB-SLAM2 is used to demonstrate the efficacy of the proposed improvements. An ablation study of accuracy and tracking stability on public datasets is reported, and comparisons with several state-of-the-art systems (namely, VINS-Mono, ORB-SLAM3, DSO, and PL-SLAM) are presented. The experimental results show that our approach improves accuracy in weak-texture scenes and under motion blur, reduces the number of tracking losses, and improves tracking robustness.
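The iterative refinement at the heart of single-point LSM can be illustrated with a minimal sketch: a translation-only, photometric Gauss–Newton update that slides a patch in the target image until it best matches the patch around the reference keypoint. This is an illustrative assumption, not the paper's implementation — all names (`bilinear`, `lsm_refine`), the patch size, and the synthetic images are invented for the demo, and the paper's geometric-consistency terms are omitted.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly sample img at the continuous coordinate (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def lsm_refine(ref, tgt, p_ref, p0, half=4, iters=20):
    """Refine a single correspondence by photometric least squares:
    starting from the coarse match p0 in tgt, Gauss-Newton updates the
    position so the patch around it matches the patch around p_ref in ref
    (translation-only model; geometric consistency omitted in this sketch)."""
    offs = [(u, v) for v in range(-half, half + 1) for u in range(-half, half + 1)]
    # Fixed template patch around the (integer) reference keypoint.
    T = np.array([bilinear(ref, p_ref[0] + u, p_ref[1] + v) for u, v in offs])
    x, y = p0
    for _ in range(iters):
        I = np.array([bilinear(tgt, x + u, y + v) for u, v in offs])
        # Image gradients via half-pixel central differences.
        Ix = np.array([bilinear(tgt, x + u + 0.5, y + v) -
                       bilinear(tgt, x + u - 0.5, y + v) for u, v in offs])
        Iy = np.array([bilinear(tgt, x + u, y + v + 0.5) -
                       bilinear(tgt, x + u, y + v - 0.5) for u, v in offs])
        J = np.stack([Ix, Iy], axis=1)                    # residual Jacobian w.r.t. (x, y)
        step, *_ = np.linalg.lstsq(J, T - I, rcond=None)  # Gauss-Newton step
        x, y = x + step[0], y + step[1]
        if np.hypot(step[0], step[1]) < 1e-4:             # converged to sub-pixel optimum
            break
    return x, y

# Synthetic demo: tgt is ref shifted by a known sub-pixel amount,
# so the true match of (20, 20) in ref is (20.7, 19.6) in tgt.
f = lambda x, y: np.sin(0.31 * x) * np.cos(0.24 * y) + 0.5 * np.cos(0.13 * x + 0.17 * y)
xx, yy = np.meshgrid(np.arange(40.0), np.arange(40.0))
ref = f(xx, yy)
tgt = f(xx - 0.7, yy + 0.4)
x, y = lsm_refine(ref, tgt, (20.0, 20.0), (21.0, 20.0))  # start from an integer match
```

Starting from the integer-accurate guess (21, 20), the refinement recovers the sub-pixel position near (20.7, 19.6); in the paper's pipeline such refined correspondences would then feed map initialization and pose optimization.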