Aim: Depth information plays a key role in enhanced perception and interaction in image-guided surgery. However, depth is difficult to obtain in monocular endoscopic surgery due to the lack of reliable cues for perceiving it. Although reprojection loss-based self-supervised learning techniques exist for estimating depth and pose, they do not efficiently utilize the temporal information from adjacent frames to handle occlusion in surgery. Methods: We design a long-term reprojection loss (LT-RL) self-supervised monocular depth estimation technique that integrates longer temporal sequences into the reprojection loss to learn better perception and to address occlusion artifacts in image-guided laparoscopic and robotic surgery. For this purpose, we exploit four temporally adjacent source frames before and after the target frame, whereas the conventional reprojection loss uses only the two immediately adjacent frames. Pixels that are visible in the target frame but occluded in the two immediately adjacent frames yield inaccurate depth; however, they have a higher chance of appearing in at least one of the four adjacent frames when the minimum reprojection loss is computed. Results: We validate LT-RL on the benchmark surgical datasets Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) and Hamlyn, comparing its performance with other state-of-the-art depth estimation methods. The experimental results show that our proposed technique yields 2%-4% lower root-mean-squared error (RMSE) than baselines trained with the vanilla reprojection loss. Conclusion: Our LT-RL self-supervised depth and pose estimation technique is a simple yet effective method for tackling occlusion artifacts in monocular surgical video. It adds no training parameters, making it flexible to integrate with any network architecture while improving performance significantly.
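The abstract gives no explicit formula, but a minimal sketch of the idea, assuming two source frames on each side of the target frame t, a per-pixel photometric error pe(·,·), and I_{s→t} denoting source frame I_s warped into the target view with the predicted depth and relative pose, could read:

```latex
% Minimal sketch (notation assumed, not taken from the paper): the per-pixel
% long-term reprojection loss takes the minimum photometric error over four
% temporally adjacent source frames instead of the usual two.
\begin{equation}
  \mathcal{L}_{\mathrm{LT}}(p)
    = \min_{s \in \{t-2,\, t-1,\, t+1,\, t+2\}}
      \mathit{pe}\bigl(I_t(p),\, I_{s \to t}(p)\bigr)
\end{equation}
```

Under this reading, the conventional minimum reprojection loss corresponds to restricting the set of source frames to {t-1, t+1}, so a pixel occluded in both immediate neighbours has no unoccluded source to match; widening the set gives such pixels a chance to be supervised by a frame in which they are visible.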