Single image super-resolution (SISR) refers to the process of reconstructing a high-resolution (HR) image from a low-resolution (LR) input image. Deep learning super-resolution algorithms have been widely used to solve SISR tasks; however, the heavy computational cost and memory footprint incurred in training deep models have hindered their real-world application. In this paper, we rebuild FSRCNN and apply it to SISR tasks. Firstly, we replace the original training dataset with RealSR, a larger dataset consisting of real-world images. Secondly, we add channel attention and residual blocks to the mapping layers and reset important hyperparameters, including the learning rate and the optimizer. Thirdly, we change the cost function from the L2 (mean squared error) loss to the L1 loss and replace the parametric rectified linear unit (PReLU) activation with the exponential linear unit (ELU), to examine how different loss functions and activation functions affect the model. Finally, we compare the rebuilt models with the official FSRCNN in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) on three common test datasets. The original model achieves better scores on all test datasets across scale factors, while the rebuilt models show better generalization. Our analyses indicate that residual blocks slightly improve performance, while different loss functions and activation functions have no evident impact on the rebuilt model.
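The architectural changes described above can be made concrete with a short sketch. The PyTorch code below is an illustrative assumption, not the paper's specification: the abstract does not state the exact layer widths, attention variant, or reduction ratio, so a squeeze-and-excitation style channel attention and a channel width of 12 (a typical shrunk width in FSRCNN) are assumed. It shows one mapping-layer block with a residual connection, channel attention, and the ELU activation, trained against an L1 cost.

```python
# Hypothetical sketch of a rebuilt FSRCNN mapping block; sizes are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed variant)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ELU(),                            # ELU, matching the rebuilt model
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))         # rescale each channel

class ResidualMappingBlock(nn.Module):
    """One mapping layer: conv -> ELU -> conv -> channel attention, plus skip."""
    def __init__(self, channels: int = 12):      # 12 = assumed shrunk width
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)                  # residual connection

block = ResidualMappingBlock(channels=12)
x = torch.randn(1, 12, 24, 24)                  # dummy feature map
y = block(x)                                    # same shape as the input
loss = nn.L1Loss()(y, torch.randn_like(y))      # L1 cost replacing the L2/MSE cost
```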
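The two reported metrics are likewise standard and easy to reproduce. The sketch below assumes 8-bit grayscale images; PSNR is computed directly from its definition, while SSIM is delegated to scikit-image's structural_similarity, an assumed dependency rather than one named in the paper.

```python
# Minimal sketch of the evaluation metrics; inputs are assumed 8-bit grayscale.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(hr: np.ndarray, sr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between the HR reference and the SR output."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a reference image and a slightly perturbed reconstruction.
hr = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
sr = np.clip(hr + np.random.randint(-5, 6, hr.shape), 0, 255).astype(np.uint8)
print(psnr(hr, sr), ssim(hr, sr, data_range=255))
```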