The mutual transfer of spatiotemporal features is a central challenge in two-stream video salient object detection. Current methods address it through spatiotemporal feature interaction, but two issues remain: the modal feature gap and the layer feature gap. To address these, we propose a Bridging Spatiotemporal feature Gap Network (BSGNet) with a global correspondence interaction and gate filtering (GCGF) module, a global-local distribution consistency (GLDC) module, and a modality-layer feature fusion framework (MLFF). Compared with previous works, BSGNet not only achieves more effective interaction through GCGF, but also bridges the modality and layer feature gaps through GLDC and MLFF. First, GCGF performs spatiotemporal feature interaction by modeling intra-modal and inter-modal global correspondences. In addition, GCGF employs a gate mechanism to control the proportion of messages transferred between appearance and motion information, characterizing the contribution of each modality's spatiotemporal features. Second, at both global and local levels, GLDC pulls together the spatiotemporal feature distributions of the same scene and pushes apart those of different scenes, enhancing distribution consistency to align spatiotemporal features and bridge the modal feature gap. Finally, MLFF provides an inter-modal and inter-layer feature fusion framework that bridges the layer feature gap caused by different modalities and different receptive fields. Extensive experiments on five benchmarks show that BSGNet outperforms state-of-the-art methods.
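
To make the gating idea concrete, the following is a minimal PyTorch sketch of a gated cross-modal message transfer; the module name, channel sizes, and the specific sigmoid-gate formulation are assumptions for illustration, not the actual GCGF design.

```python
import torch
import torch.nn as nn

class GatedCrossModalTransfer(nn.Module):
    """Illustrative sketch: a learned gate controls how much motion
    information flows into the appearance stream and vice versa.
    Layer choices and shapes are assumptions, not the paper's GCGF."""

    def __init__(self, channels: int):
        super().__init__()
        # The gate is predicted jointly from both modalities.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor):
        # g in [0, 1] sets the proportion of the cross-modal message.
        g = self.gate(torch.cat([appearance, motion], dim=1))
        appearance_out = appearance + g * motion            # motion -> appearance
        motion_out = motion + (1.0 - g) * appearance        # appearance -> motion
        return appearance_out, motion_out

# Usage with hypothetical feature maps of shape (B, C, H, W).
feats_a = torch.randn(2, 64, 44, 44)
feats_m = torch.randn(2, 64, 44, 44)
fa, fm = GatedCrossModalTransfer(64)(feats_a, feats_m)
```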
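
Similarly, the pull/push objective in GLDC can be illustrated with an InfoNCE-style sketch in which each modality's feature distribution is summarized by global average pooling; the function name, the pooling-based summary, and the temperature are assumptions, not the paper's actual GLDC formulation.

```python
import torch
import torch.nn.functional as F

def distribution_consistency_loss(appearance, motion, scene_ids, tau=0.1):
    """Illustrative pull/push loss on spatiotemporal features.
    A 'distribution' is summarized here by global average pooling;
    this is an assumption, not the paper's GLDC. At the local level,
    the same objective could be applied to patch-wise statistics.

    appearance, motion: (B, C, H, W) features from the two streams.
    scene_ids: (B,) integer labels; equal ids denote the same scene.
    """
    # Global-level summary of each modality's feature distribution.
    za = F.normalize(appearance.mean(dim=(2, 3)), dim=1)  # (B, C)
    zm = F.normalize(motion.mean(dim=(2, 3)), dim=1)      # (B, C)

    sim = za @ zm.t() / tau                               # cross-modal similarity (B, B)
    pos = scene_ids.unsqueeze(0) == scene_ids.unsqueeze(1)

    # Pull together same-scene pairs, push apart different-scene pairs.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob[pos]).mean()

# Usage with a hypothetical batch: two clips per scene.
fa = torch.randn(4, 64, 44, 44)
fm = torch.randn(4, 64, 44, 44)
loss = distribution_consistency_loss(fa, fm, torch.tensor([0, 0, 1, 1]))
```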