Optical coherence tomography angiography (OCTA) can visualize the retinal microvasculature and is important for qualitatively and quantitatively identifying potential biomarkers of different retinal diseases. However, the resolution of optical coherence tomography (OCT) angiograms inevitably decreases as the field-of-view (FOV) increases under a fixed acquisition time. To address this issue, we propose a novel reference-based super-resolution (RefSR) framework that preserves the resolution of OCT angiograms while enlarging the scanning area. Specifically, textures produced by a conventional RefSR pipeline are used to train a learnable texture generator (LTG), which is designed to generate textures conditioned on the input. The key difference between the proposed method and traditional RefSR models is that the textures used during inference are generated by the LTG instead of being searched from a single reference (Ref) image. Since the LTG is optimized throughout the whole training process, the available texture space is significantly enlarged: it is no longer limited to a single Ref image but extends to all textures contained in the training samples. Moreover, the proposed LTGNet does not require a Ref image at the inference stage and is therefore insensitive to the choice of Ref image. Both quantitative and visual results show that LTGNet achieves competitive performance and robustness compared with state-of-the-art methods, indicating its reliability and promise for real-world deployment. The source code is available at https://github.com/RYY0722/LTGNet.
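To make the core idea concrete, the following is a minimal PyTorch sketch of an LTG-style network: a texture branch is generated directly from the low-resolution features rather than searched from a Ref image, so inference needs only the LR angiogram. All module names, channel sizes, and the texture-supervision loss shown here are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
# Illustrative sketch only -- architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableTextureGenerator(nn.Module):
    """Generates texture features directly from LR features (assumed design)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, lr_feat: torch.Tensor) -> torch.Tensor:
        return self.body(lr_feat)


class LTGNetSketch(nn.Module):
    """Toy RefSR-style network where textures come from the LTG, not a Ref image."""
    def __init__(self, channels: int = 64, scale: int = 2):
        super().__init__()
        self.encoder = nn.Conv2d(1, channels, 3, padding=1)   # OCTA slabs treated as single-channel
        self.ltg = LearnableTextureGenerator(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(lr)
        tex = self.ltg(feat)                        # generated textures replace Ref-searched ones
        out = self.fuse(torch.cat([feat, tex], 1))  # fuse content and texture branches
        return self.upsample(out)


# During training, the generated textures could additionally be supervised by the
# textures searched from Ref images by a conventional RefSR pipeline, e.g.:
#   tex_loss = F.l1_loss(model.ltg(feat), searched_textures.detach())
# At inference time only the LR angiogram is needed -- no Ref image is involved.
if __name__ == "__main__":
    model = LTGNetSketch()
    lr_angiogram = torch.randn(1, 1, 64, 64)        # dummy low-resolution OCTA patch
    sr = model(lr_angiogram)
    print(sr.shape)                                 # torch.Size([1, 1, 128, 128])
```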