Stereo matching plays a crucial role in computer vision and robotics applications. An accurate cost volume and a robust disparity regression method are essential for high-accuracy stereo matching. Following GCNet and PSMNet, the dominant paradigm has been to construct a 4D cost volume and then regress disparity with the soft argmin operation. However, this approach encounters difficulties when the cost volume has a multi-modal distribution. One cause of such multi-modal distributions is occlusion: occluded regions have no matching region in the reference image, and this case is rarely discussed. In this paper, we propose to exploit global context information to improve the model's performance in occluded regions. Recently, novel recurrent neural network regression methods have been proposed, but most of them regress disparity maps from a 3D cost volume. We propose a new combinatorial paradigm that combines stacked hourglass modules and recurrent neural networks to further aggregate the 4D cost volume and regress disparity, respectively. The proposed method can be seamlessly integrated into most stereo matching networks; in our experiments it improves accuracy by 45% for PSMNet and 38% for GwcNet. Experimental results on the Scene Flow, KITTI2012, KITTI2015, and ETH3D datasets show that our method is competitive. The code is available at: https://github.com/truman1211/HCRnet.
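To make the multi-modal failure mode concrete, below is a minimal NumPy sketch of the soft argmin regression popularized by GCNet: a softmax over negated matching costs along the disparity axis, followed by an expectation over disparity values. The toy cost vectors are invented for illustration and are not from the paper's experiments.

```python
import numpy as np

def soft_argmin(cost, disparities):
    """Soft argmin disparity regression (GCNet-style): a differentiable
    expectation over candidate disparities, weighted by the softmax of the
    negated matching costs (lower cost -> higher probability)."""
    # Numerically stable softmax over the negated costs.
    logits = -cost
    p = np.exp(logits - np.max(logits))
    p /= p.sum()
    # Expected disparity under this distribution.
    return float(np.dot(p, disparities))

disps = np.arange(5, dtype=float)

# Unimodal cost: a single sharp minimum at d=2, so the expectation is 2.0.
unimodal = np.array([8.0, 4.0, 0.0, 4.0, 8.0])
print(soft_argmin(unimodal, disps))  # ~2.0

# Multi-modal cost (e.g. an occluded or ambiguous region): two competing
# minima at d=0 and d=4 pull the expectation toward their mean, d=2,
# even though d=2 actually has the *highest* cost.
multimodal = np.array([0.0, 6.0, 8.0, 6.0, 0.0])
print(soft_argmin(multimodal, disps))  # ~2.0, far from both true modes
```

The second case shows why a multi-modal cost distribution breaks the soft argmin: the expectation lands between the modes rather than on either of them, which motivates sharpening the distribution via better cost aggregation.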