This paper presents a super-resolution (SR) technique for the enhancement of infrared (IR) images. The proposed technique is based on the image acquisition model and exploits sparse representations of low-resolution (LR) and high-resolution (HR) patches of the IR images. It combines bicubic interpolation with minimum mean square error (MMSE) estimation to predict the HR image, in a scheme that can be interpreted as a feed-forward neural network. The proposed algorithm, which addresses the problem that hardware limitations yield only LR images, is cast within a big data processing model. The performance of the proposed technique is compared with those of the standard regularized image interpolation technique and an adaptive block-by-block least-squares (LS) interpolation technique in terms of peak signal-to-noise ratio (PSNR). Numerical results reveal the superiority of the proposed SR technique.
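To make the interpolation-plus-MMSE idea concrete, the sketch below illustrates the general flavor of such a pipeline, not the authors' exact method: an LR frame is upscaled by cubic-spline interpolation (standing in here for bicubic) and then refined patch by patch with a linear MMSE estimator learned from an example LR/HR pair. The learned linear map can be read as a single-layer feed-forward network. The patch size, regularization constant, training setup, and all function names are illustrative assumptions, and the paper's sparse-representation stage is omitted.

```python
# Illustrative sketch only (not the paper's algorithm): bicubic-style upscaling
# followed by a patch-wise linear MMSE correction learned from an LR/HR pair.
import numpy as np
from scipy.ndimage import zoom  # cubic-spline zoom stands in for bicubic interpolation

PATCH = 5  # assumed patch size (hypothetical choice)

def extract_patches(img, size=PATCH):
    """Collect all overlapping size x size patches as row vectors."""
    h, w = img.shape
    return np.asarray([img[i:i + size, j:j + size].ravel()
                       for i in range(h - size + 1)
                       for j in range(w - size + 1)])

def fit_lmmse(lr_up, hr):
    """Learn the linear MMSE map x_hat = mu_x + Cxy Cyy^{-1} (y - mu_y)
    from co-located patches of the upscaled LR image (y) and the HR image (x)."""
    Y, X = extract_patches(lr_up), extract_patches(hr)
    mu_y, mu_x = Y.mean(axis=0), X.mean(axis=0)
    Yc, Xc = Y - mu_y, X - mu_x
    Cyy = Yc.T @ Yc / len(Y) + 1e-6 * np.eye(Y.shape[1])  # regularized covariance
    Cxy = Xc.T @ Yc / len(Y)
    W = Cxy @ np.linalg.inv(Cyy)  # acts like a single linear (feed-forward) layer
    return W, mu_x, mu_y

def apply_lmmse(lr_up, W, mu_x, mu_y, size=PATCH):
    """Apply the learned estimator patch by patch, averaging overlapping outputs."""
    h, w = lr_up.shape
    out = np.zeros_like(lr_up)
    cnt = np.zeros_like(lr_up)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            y = lr_up[i:i + size, j:j + size].ravel()
            x_hat = mu_x + W @ (y - mu_y)
            out[i:i + size, j:j + size] += x_hat.reshape(size, size)
            cnt[i:i + size, j:j + size] += 1
    return out / cnt

# Usage: simulate an LR/HR training pair, fit the estimator, and super-resolve.
rng = np.random.default_rng(0)
hr_train = rng.random((64, 64))                              # stand-in HR IR frame
lr_train_up = zoom(zoom(hr_train, 0.5, order=3), 2, order=3)  # downscale, then upscale
W, mu_x, mu_y = fit_lmmse(lr_train_up, hr_train)
sr = apply_lmmse(lr_train_up, W, mu_x, mu_y)
```

Under the joint-Gaussianity assumption implicit in this sketch, the patch-wise linear map is the MMSE-optimal correction to the interpolated image, which is why the scheme admits the feed-forward network interpretation mentioned above.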