Deep learning-based methods have made remarkable progress in stereo matching accuracy. However, two issues still hinder the production of a perfect disparity map: (1) blurred object boundaries and discontinuous disparities within continuous regions of the estimated disparity maps, and (2) the lack of an effective means to precisely restore resolution. In this paper, we propose a deep stereo matching model built on multi-frequency inputs and an attention mechanism. Specifically, the high-frequency and low-frequency components of the input image, together with the original RGB image, are fed into a 2D convolutional feature extraction network. This encourages distinct boundaries and continuous disparities in smooth regions of the estimated disparity maps. To regularize the 4D cost volume for disparity regression, we propose a 3D context-guided attention module for the stacked hourglass network, in which high-level cost volumes serve as context to guide low-level features, yielding high-resolution yet precise feature maps. The proposed approach achieves competitive performance on the SceneFlow and KITTI 2015 datasets.
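As a rough illustration only, the sketch below shows one plausible way to build the multi-frequency input described above: splitting an RGB image into low- and high-frequency components and concatenating them with the original image before a 2D convolutional feature extractor. The Gaussian-blur decomposition, channel counts, and layer configuration are assumptions for the sketch, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_kernel2d(ksize=5, sigma=1.0):
    """Build a normalized 2D Gaussian kernel for low-pass filtering."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    kernel = torch.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()


def split_frequencies(img, ksize=5, sigma=1.0):
    """Split an RGB batch (N, 3, H, W) into low- and high-frequency parts.

    Low frequency: Gaussian-blurred image (smooth regions).
    High frequency: residual image (edges / boundaries).
    This decomposition is an assumption; the paper does not specify one here.
    """
    kernel = gaussian_kernel2d(ksize, sigma).to(img.device, img.dtype)
    kernel = kernel.view(1, 1, ksize, ksize).repeat(img.shape[1], 1, 1, 1)
    low = F.conv2d(img, kernel, padding=ksize // 2, groups=img.shape[1])
    high = img - low
    return low, high


class MultiFrequencyFeatureExtractor(nn.Module):
    """2D-conv feature extractor over concatenated RGB + low/high-frequency inputs."""

    def __init__(self, out_channels=32):
        super().__init__()
        # 3 (RGB) + 3 (low-frequency) + 3 (high-frequency) input channels.
        self.net = nn.Sequential(
            nn.Conv2d(9, out_channels, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, img):
        low, high = split_frequencies(img)
        return self.net(torch.cat([img, low, high], dim=1))


# Example: extract features from one image of a stereo pair.
left = torch.randn(1, 3, 256, 512)
extractor = MultiFrequencyFeatureExtractor()
feat_left = extractor(left)  # shape (1, 32, 128, 256)
```

In a full stereo pipeline, the same extractor would be applied to both views and the resulting features used to build the 4D cost volume that the attention module regularizes.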