Machine vision systems have demonstrated practical applications in various fields where perception of the environment is essential. Stereo vision systems are among the most widely used approaches for three-dimensional mapping. Performing this task involves several considerations, one of which is camera calibration. For this reason, this paper proposes a novel camera calibration method for improving depth estimation accuracy in stereo vision systems. The method addresses lens distortion by adjusting the pixel coordinates of surface points before the triangulation process. The new pixel coordinates are computed using calibration coefficients obtained through multivariate quadratic regression. In addition, the proposed method corrects the relative orientation of the cameras and computes a compensation angle. The method is remarkably robust to varying illumination levels because it adjusts pixel positions independently of image content. Compared with traditional methods, it requires fewer steps to implement, as it needs only one image per camera. The proposed method could have significant implications for fields such as autonomous navigation and robotics, where depth estimation is a crucial component. Overall, this paper presents a valuable contribution to the field of computer vision by offering a new, efficient, and effective approach to camera calibration for stereo vision systems. The experiments performed demonstrate that the proposed method improves depth estimation in the stereo vision system, with an improvement of 34.15% in the mean absolute error (MAE) and 48.38% in the standard deviation (STD) when compared to one of the most commonly used methods.
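
The abstract does not specify the exact form of the regression, so the following is only a minimal illustrative sketch, assuming a second-order polynomial in the distorted pixel coordinates (u, v) fitted by ordinary least squares from correspondences between observed and reference pixel positions (e.g., obtained from a calibration target). The function and variable names are hypothetical and not taken from the paper.

```python
# Hedged sketch: multivariate quadratic regression mapping distorted pixel
# coordinates to corrected ones before triangulation. This is an assumed
# formulation for illustration, not the paper's exact model.
import numpy as np

def quadratic_design_matrix(uv):
    """Build the multivariate quadratic basis [1, u, v, u^2, u*v, v^2]."""
    u, v = uv[:, 0], uv[:, 1]
    return np.column_stack([np.ones_like(u), u, v, u**2, u * v, v**2])

def fit_correction(distorted_uv, reference_uv):
    """Fit one set of quadratic coefficients per output coordinate (least squares)."""
    A = quadratic_design_matrix(distorted_uv)
    coeffs_u, *_ = np.linalg.lstsq(A, reference_uv[:, 0], rcond=None)
    coeffs_v, *_ = np.linalg.lstsq(A, reference_uv[:, 1], rcond=None)
    return coeffs_u, coeffs_v

def correct_pixels(distorted_uv, coeffs_u, coeffs_v):
    """Apply the fitted coefficients to adjust pixel positions before triangulation."""
    A = quadratic_design_matrix(distorted_uv)
    return np.column_stack([A @ coeffs_u, A @ coeffs_v])

if __name__ == "__main__":
    # Synthetic example: quadratically distorted observations of known reference pixels.
    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 640, size=(50, 2))
    distorted = reference + 0.002 * (reference - 320) ** 2 / 320 + rng.normal(0, 0.1, reference.shape)
    cu, cv = fit_correction(distorted, reference)
    corrected = correct_pixels(distorted, cu, cv)
    print("mean residual (px):", np.abs(corrected - reference).mean())
```

Because the correction operates purely on pixel coordinates, it can be applied to each camera's detections before triangulation regardless of image content, which is consistent with the robustness to illumination changes claimed above.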