Existing protocols for image-guided adaptive brachytherapy (BT) have distinct limitations: (1) possible misalignment of the applicator position caused by patient transfer between the treatment room and the computed tomography (CT)/magnetic resonance imaging (MRI) suite, (2) an inherent inability to continuously monitor physiological and anatomical changes over the course of 3–10-fraction treatment regimens, and (3) insufficient tracking of radioactive source movement within the patient's body. To address these challenges, this study proposes a deep-learning-based C-arm CT/SPECT imaging system tailored for image-guided adaptive BT, with the aim of enabling high-precision online adaptive BT and thereby improving the adaptability and effectiveness of oncological care. A dataset of 66 non-contrast pelvic CT studies was acquired, and limited-angle cone-beam CT (CBCT) images were generated by mathematically computing sinograms for voxel phantoms derived from these CT images. Using a generative adversarial network (GAN), low-quality CBCT images obtained with a 110° limited-angle rotating C-arm system were transformed into high-quality diagnostic CT images. Network performance was evaluated against CBCT images reconstructed with an iterative reconstruction method and against ground-truth images. Synthetic CT images were successfully generated from the low-quality CBCT images, considerably reducing streak artifacts while preserving anatomical structures. Our deep-learning-based image reconstruction technique improved limited-angle CBCT image quality to a level comparable to that of the ground truth, and did so faster than a state-of-the-art iterative reconstruction algorithm. We expect this approach to be effectively implemented in online image-guided adaptive BT and patient dose verification.
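For illustration, the following minimal sketch (not the authors' pipeline) simulates a 110° limited-angle acquisition with a parallel-beam Radon transform; the study uses cone-beam geometry and patient-derived voxel phantoms, so the Shepp-Logan phantom, parallel-beam geometry, and scikit-image functions here are simplifying assumptions. Reconstructing from the truncated arc produces the streaked, low-quality image that a restoration network would be trained to correct.

```python
# Minimal sketch of a 110-degree limited-angle acquisition (illustrative
# parallel-beam simplification; the paper describes cone-beam geometry).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()  # stand-in for a pelvic CT slice

# Project only over a 110-degree arc instead of the full 180 degrees.
angles = np.linspace(0.0, 110.0, 110, endpoint=False)
sinogram = radon(phantom, theta=angles)

# Filtered back-projection from the truncated arc yields severe streak
# artifacts -- the "low-quality CBCT" input to the translation network.
limited_angle_recon = iradon(sinogram, theta=angles, filter_name="ramp")
```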
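Likewise, a hedged sketch of the CBCT-to-CT translation step: a pix2pix-style conditional GAN in PyTorch, where a generator maps a limited-angle CBCT slice to a synthetic CT slice and a patch discriminator judges realism. The network sizes, the adversarial-plus-L1 objective, and all hyperparameters below are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, down=True):
    """4x4 conv (stride-2 downsampling or transposed upsampling) + norm + activation."""
    conv = (nn.Conv2d(cin, cout, 4, 2, 1) if down
            else nn.ConvTranspose2d(cin, cout, 4, 2, 1))
    return nn.Sequential(conv, nn.InstanceNorm2d(cout), nn.LeakyReLU(0.2))

class Generator(nn.Module):
    """Small encoder-decoder mapping a 1-channel CBCT slice to a synthetic CT slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 64), conv_block(64, 128), conv_block(128, 256),
            conv_block(256, 128, down=False), conv_block(128, 64, down=False),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic scoring (CBCT, CT) pairs as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(2, 64), conv_block(64, 128),
            nn.Conv2d(128, 1, 4, 1, 1))  # per-patch real/fake logits
    def forward(self, cbct, ct):
        return self.net(torch.cat([cbct, ct], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(cbct, ct, l1_weight=100.0):
    """One adversarial update on a batch of paired (CBCT, ground-truth CT) slices."""
    # Discriminator: push real pairs toward 1 and generated pairs toward 0.
    fake_ct = G(cbct)
    d_real, d_fake = D(cbct, ct), D(cbct, fake_ct.detach())
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool D while staying close to the ground-truth CT (L1 term).
    d_fake = D(cbct, fake_ct)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake_ct, ct)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this sketch the L1 term anchors the synthetic CT to the paired ground truth so that anatomy is preserved, while the adversarial term discourages the blurring that a pixel-wise loss alone tends to produce; this division of labor is the standard rationale for adversarial image-to-image translation, not a claim about the authors' specific loss design.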