Achieving high compression ratios for remote sensing images is challenging because such images contain rich information and complex backgrounds. Long-range context information can help a model identify spatially redundant features. We therefore propose a long-range convolution compression network (LRCompNet) for remote sensing images. To capture long-range context information, we first propose a long-range convolution and design a lightweight compression model around it. In addition, we propose an improved non-local attention module with reduced computational complexity, making it suitable for remote sensing image compression. We also collect two remote sensing datasets (a GoogleMap dataset and a GF7 dataset) to evaluate the proposed method against existing codecs such as JPEG and JPEG2000, as well as learned compression models. We assess overall performance in terms of rate-distortion curves and execution time. Results on both datasets show that the proposed network achieves improved rate-distortion performance while significantly reducing time complexity compared with competing methods.
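
The abstract does not detail how the long-range convolution is constructed; as a rough illustration of how convolutions can capture long-range context, the sketch below uses a depthwise convolution with a large dilation in PyTorch. This is an assumption for illustration only, not the paper's actual operator, and the module name `LongRangeConvBlock` and its parameters are hypothetical.

```python
# Hedged sketch: one common way to enlarge a convolution's receptive field
# (and thus capture long-range context) is a depthwise dilated convolution.
# This is an illustrative assumption, NOT the paper's long-range convolution,
# whose exact design is not specified in the abstract.
import torch
import torch.nn as nn


class LongRangeConvBlock(nn.Module):  # hypothetical name
    """Depthwise dilated convolution followed by a pointwise mixing layer.

    With kernel_size=7 and dilation=4, each output pixel sees a 25x25
    neighborhood, far larger than an ordinary 3x3 convolution.
    """

    def __init__(self, channels: int, kernel_size: int = 7, dilation: int = 4):
        super().__init__()
        padding = dilation * (kernel_size - 1) // 2  # preserve spatial size
        self.spatial = nn.Conv2d(channels, channels, kernel_size,
                                 padding=padding, dilation=dilation,
                                 groups=channels)  # depthwise keeps cost low
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection so the block can be stacked in an encoder.
        return x + self.pointwise(self.act(self.spatial(x)))


if __name__ == "__main__":
    x = torch.randn(1, 64, 256, 256)   # example encoder feature map
    y = LongRangeConvBlock(64)(x)
    print(y.shape)                      # torch.Size([1, 64, 256, 256])
```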