Hyperspectral small target detection (HSTD) is a promising pixel-level detection task. It remains challenging, however, because targets have low spatial contrast against the background, target pixels are heavily outnumbered by background pixels, and the spectral dimensionality is high. To address these issues, this work proposes a representation-learning-based graph and generative network for hyperspectral small target detection. The model builds a fusion network through frequency representation for HSTD, where the novel architecture incorporates irregular topological data and spatial–spectral features to improve its representation ability. First, a Graph Convolutional Network (GCN) module models the non-local topological relationships between samples to represent the hyperspectral scene's underlying data structure; training the GCN in mini-batches reduces the high computational cost of building an adjacency matrix over the full high-dimensional data set. In parallel, a generative model strengthens discriminative reconstruction and deep feature representation with respect to the target spectral signature. Finally, a fusion module combines the two types of extracted hyperspectral features and integrates their complementary merits for hyperspectral data interpretation, improving both detection and background suppression. The performance of the proposed approach is evaluated using the average AUC_{D,F}, AUC_{F,τ}, AUC_BS, and AUC_SNPR scores, which are 0.99660, 0.00078, 0.99587, and 333.629, respectively. An AUC_{D,F} close to 1 together with a low AUC_{F,τ} indicates strong detection performance across varying thresholds with few false alarms. Experiments on different hyperspectral data sets demonstrate the advantages of the proposed architecture.
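The mini-batch GCN idea can be illustrated with a minimal sketch: build the adjacency matrix only over the samples in the current batch, normalize it, and propagate. The RBF similarity kernel, its bandwidth `sigma`, and the symmetric normalization below are standard choices assumed for illustration, not necessarily the exact design used in the paper.

```python
import numpy as np

def minibatch_gcn_layer(x, w, sigma=1.0):
    """One GCN layer on a mini-batch, building the adjacency on the fly.

    x: (n, d) spectral feature vectors for the n pixels in the batch
    w: (d, h) layer weight matrix
    sigma: RBF kernel bandwidth (hypothetical choice)
    """
    # Pairwise squared distances between batch samples only (n x n,
    # instead of an adjacency over the whole image)
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
    # RBF adjacency; the diagonal is exp(0) = 1, which acts as a self-loop
    a = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))
    # Symmetric normalization: D^{-1/2} A D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Graph convolution followed by ReLU
    return np.maximum(a_norm @ x @ w, 0.0)
```

Because the adjacency is rebuilt per batch, the cost per step is O(n²d) in the batch size n rather than in the number of pixels in the scene.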
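The four evaluation scores come from the standard 3D-ROC analysis, and can be sketched as follows. The threshold grid, the trapezoidal integration, and the helper name `roc_auc_metrics` are illustrative assumptions; the definitions AUC_BS = AUC_{D,F} − AUC_{F,τ} and AUC_SNPR = AUC_{D,τ} / AUC_{F,τ} follow common 3D-ROC usage.

```python
import numpy as np

def _trapz(y, x):
    # Trapezoidal integration, kept local for clarity
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

def roc_auc_metrics(scores, labels, n_thresholds=256):
    """3D-ROC AUC scores for a pixel-level detector (illustrative sketch).

    scores: (n,) detection map values scaled to [0, 1]
    labels: (n,) binary ground truth (1 = target pixel)
    Returns (AUC_{D,F}, AUC_{F,tau}, AUC_BS, AUC_SNPR).
    """
    taus = np.linspace(0.0, 1.0, n_thresholds)
    s = np.asarray(scores, dtype=float)
    pos = np.asarray(labels) == 1
    # Detection and false-alarm probabilities at each threshold tau
    pd = np.array([(s[pos] >= t).mean() for t in taus])
    pf = np.array([(s[~pos] >= t).mean() for t in taus])
    auc_dtau = _trapz(pd, taus)  # area under P_d vs tau
    auc_ftau = _trapz(pf, taus)  # area under P_f vs tau (lower is better)
    # ROC curve: reverse so P_f is non-decreasing, then integrate P_d over P_f
    auc_df = _trapz(pd[::-1], pf[::-1])
    return auc_df, auc_ftau, auc_df - auc_ftau, auc_dtau / max(auc_ftau, 1e-12)
```

A detector that ranks every target pixel above every background pixel drives AUC_{D,F} and AUC_BS toward 1 and AUC_{F,τ} toward 0, matching the pattern of the reported averages.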