Remote-sensing (RS) image super-resolution (SR) aims to recover high-resolution (HR) images from the corresponding low-resolution (LR) images. In recent years, SR methods based on convolutional neural networks (CNNs) have achieved remarkable performance for fixed scale factors (e.g., ×2, ×3, and ×4). However, these methods must train a separate model for each scale factor and cannot directly reconstruct HR images at non-integer scale factors. To address the lack of research on arbitrary-scale RS image SR, we propose a novel amplification module called the amplification-arbitrary module (A²M). A²M can be easily embedded at the tail of existing SR networks, so that these networks can also achieve end-to-end arbitrary-scale SR. Specifically, we first use a combination of convolutional and pixelshuffle layers to upsample the deep feature map by 2×, 3×, and 4× along the spatial dimensions. Information cross transmission (ICT) is then used to gather information across the multiple spatial sizes. ICT not only enriches the diversity of information but also avoids training only a single branch during the training stage. To make better use of multi-scale features, we design an efficient signal weighting unit (SWU) that generates a correlation matrix at small cost; the signals of the multi-scale features at the same position are then fused according to this correlation matrix. Experimental results on RS and generic datasets demonstrate that our method performs well at arbitrary scale factors with a single pre-trained model.
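The sub-pixel (pixelshuffle) upsampling used in the fixed-scale branches can be illustrated with a minimal NumPy sketch of the generic depth-to-space operation (a generic illustration, not the authors' implementation; the array layout and function name are assumptions):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: rearrange a (C*r*r, H, W) feature map into
    (C, H*r, W*r), as done by a pixelshuffle layer after a convolution
    that expands the channel dimension by r*r."""
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channels must be divisible by r*r"
    c = c_r2 // (r * r)
    # split channels into (C, r, r), interleave the r-blocks spatially
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# toy example: 4 channels on a 2x2 grid become 1 channel on a 4x4 grid
feat = np.arange(16, dtype=np.float32).reshape(4, 2, 2)
up = pixel_shuffle(feat, 2)
print(up.shape)  # -> (1, 4, 4)
```

In an arbitrary-scale setting, one such branch per integer factor (2×, 3×, 4×) produces feature maps of different spatial sizes, which a later stage can then fuse.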