Assessing the visual quality of super-resolution images (SRIs) is crucial for advancing algorithm development, but it remains an unsolved problem. In this paper, we present a novel reduced-reference image quality assessment (RR-IQA) method specifically suited for evaluating SRIs. Our approach leverages information from the input low-resolution (LR) image as a reference signal to extract features that are most relevant to modeling the visual quality of SRIs. We analyze the artifact characteristics of SRIs and demonstrate that features describing edge orientations, high-frequency components, and textures are the most important for this task. To extract these features, we first perform structure–texture decompositions (STD) on both the SRI and its LR input, then obtain the edge orientation feature through a traditional hand-crafted approach and employ deep neural networks to extract features related to high-frequency components and textures. We employ a shallow multilayer perceptron (MLP) to predict an image quality score based on these quality-relevant features. To improve feature representation ability and prevent overfitting, we pretrain the feature extraction module, which accounts for over 99.8% of the total model parameters, using a large number of unlabeled samples. We then use the scarce and precious samples with mean opinion score (MOS) labels to train a high-quality shallow MLP predictor. Our experimental results show that the proposed method outperforms classical and state-of-the-art models.
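For concreteness, the following is a minimal PyTorch sketch of the final prediction stage only: a shallow MLP mapping the concatenated quality-relevant features to a single quality score. The feature dimensions, hidden width, and class names are hypothetical placeholders, and the pretrained feature extraction module (STD plus hand-crafted and deep feature extractors) is represented here only by its output vectors; this is not the implementation used in our experiments.

```python
import torch
import torch.nn as nn

# Hypothetical feature dimensions -- placeholders, not the values used in the paper.
EDGE_DIM, HF_DIM, TEX_DIM = 36, 128, 128  # edge orientation / high-frequency / texture


class ShallowMLPPredictor(nn.Module):
    """Shallow MLP mapping concatenated quality-relevant features to a quality score."""

    def __init__(self, in_dim: int = EDGE_DIM + HF_DIM + TEX_DIM, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),  # predicted (MOS-like) quality score
        )

    def forward(self, edge_feat, hf_feat, tex_feat):
        # Inputs are assumed to come from the pretrained feature extraction module:
        # edge orientations from a hand-crafted descriptor, high-frequency and texture
        # features from deep networks applied to the STD components of the SRI and LR input.
        x = torch.cat([edge_feat, hf_feat, tex_feat], dim=-1)
        return self.mlp(x)


if __name__ == "__main__":
    batch = 4
    predictor = ShallowMLPPredictor()
    score = predictor(
        torch.randn(batch, EDGE_DIM),
        torch.randn(batch, HF_DIM),
        torch.randn(batch, TEX_DIM),
    )
    print(score.shape)  # torch.Size([4, 1])
```

Because the MLP holds only a tiny fraction of the parameters, it can be trained on the limited MOS-labeled data while the heavy feature extractors remain fixed after unsupervised pretraining.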