When given asymmetric-resolution stereo images as input, the prediction performance of existing stereo matching algorithms declines significantly. To address this, we introduce SGANet (Super-resolution Guided Asymmetric Stereo Matching Network), a model trained in an unsupervised manner to overcome the difficulty of acquiring ground-truth disparity. For the lower-resolution side, we design a stereo-guided super-resolution (SGSR) module, in which the network generates a detail-enriched super-resolved image under the guidance of the higher-resolution side. In addition, we propose a feature consistency loss for this module that measures the similarity between the real and super-resolved images in feature space. Experimental results on the KITTI autonomous driving dataset demonstrate that the SGSR module and the feature consistency loss effectively improve disparity prediction for asymmetric-resolution stereo images.
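To make the feature consistency idea concrete, the following is a minimal sketch of a feature-space loss: extract feature maps from both the real and the super-resolved image and penalize their mean absolute difference. The abstract does not specify the feature extractor, so a simple image-gradient extractor is used here as a hypothetical stand-in (in practice a CNN's intermediate activations would typically serve this role).

```python
import numpy as np

def _gradient_features(img):
    """Stand-in feature extractor: horizontal and vertical image gradients.
    (The paper's actual extractor is unspecified; this is a placeholder.)"""
    gx = img[:, 1:] - img[:, :-1]   # horizontal gradient, shape (H, W-1)
    gy = img[1:, :] - img[:-1, :]   # vertical gradient, shape (H-1, W)
    return gx, gy

def feature_consistency_loss(real, super_resolved):
    """Mean L1 distance between the feature maps of the real and
    super-resolved images -- one plausible form of a feature-space
    consistency loss."""
    feats_real = _gradient_features(real)
    feats_sr = _gradient_features(super_resolved)
    return sum(np.abs(a - b).mean()
               for a, b in zip(feats_real, feats_sr)) / len(feats_real)

# Identical images yield zero loss; differing images yield a positive loss.
img = np.random.rand(8, 8)
print(feature_consistency_loss(img, img))              # → 0.0
print(feature_consistency_loss(img, np.zeros((8, 8))) > 0)  # → True
```

Because the loss compares feature responses rather than raw pixels, it tolerates small intensity shifts while still penalizing missing structure in the super-resolved output.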