We present a novel vector quantization (VQ) module for two state-of-the-art long-range simultaneous localization and mapping (SLAM) algorithms. The VQ task in SLAM is generally performed with unsupervised methods; we provide an alternative approach by embedding a semisupervised hyperbolic graph convolutional neural network (HGCN) in the VQ step of the SLAM process. The SLAM platforms we utilize for this purpose are fast appearance-based mapping (FABMAP) and oriented FAST and rotated BRIEF (ORB), both of which rely on extracting features from the captured images in their loop closure detection (LCD) module. For the first time, we treat the space formed by these features, speeded-up robust features (SURF), as a graph, enabling us to apply an HGCN in the VQ step, which improves LCD performance. The HGCN vector quantizes the SURF feature space, yielding a bag-of-words (BoW) representation of the images; this representation is then used to compute LCD accuracy and recall. We refer to our approaches as HGCN-FABMAP and HGCN-ORB. The main advantage of using an HGCN in the LCD step is that it scales linearly as features accumulate. Benchmarking experiments show the superiority of our methods in terms of both trajectory-generation accuracy on small-scale paths and LCD accuracy and recall on large-scale problems.
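As a rough, generic sketch of the quantization-to-BoW step described above (this is not the paper's HGCN: plain nearest-centroid assignment stands in for the learned cluster assignments, and all names and values are illustrative):

```python
def bag_of_words(descriptors, vocabulary):
    """Assign each feature descriptor to its nearest visual word
    (a centroid in the vocabulary) and count occurrences to form
    the image's normalized BoW histogram."""
    def sq_dist(a, b):
        # Squared Euclidean distance between two descriptor vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    hist = [0] * len(vocabulary)
    for d in descriptors:
        word = min(range(len(vocabulary)), key=lambda k: sq_dist(d, vocabulary[k]))
        hist[word] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Toy example: a vocabulary of two visual words and four descriptors,
# two near each word, so the normalized histogram is [0.5, 0.5].
bow = bag_of_words([(0.0, 1.0), (1.0, 0.0), (9.0, 9.0), (10.0, 11.0)],
                   [(0.0, 0.0), (10.0, 10.0)])
```

In the paper's pipeline, the vocabulary would instead come from the HGCN's semisupervised partitioning of the SURF feature graph, and the resulting histograms feed the LCD stage.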