Based on the generative adversarial network (GAN), we present a multifunctional X-ray tomographic protocol for artifact correction, noise suppression, and super-resolution of reconstructed images. The protocol consists mainly of a data preprocessing module and a multifunctional GAN-based loss function that simultaneously handles ring-artifact correction and super-resolution. The protocol successfully removes ring artifacts and improves the contrast-to-noise ratio (CNR) and spatial resolution (SR) of reconstructed images, demonstrating the capability to adaptively rectify ring artifacts of varying intensities and types while achieving super-resolution. Compared with leading deep-learning models and conventional tomographic correction methods, it also offers higher processing speed and lower information loss, especially for images of smaller dimensions. This study provides a robust optimization tool for equivalently realizing large-field-of-view, high-resolution X-ray tomography. The experimental datasets were collected from a series of X-ray cone-beam computed tomography scans of biological samples.
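A loss of the kind described above, combining an adversarial term with fidelity and artifact-suppression terms, could be sketched as follows. This is a minimal illustrative sketch only: the weighting coefficients, the ring-artifact proxy (penalizing per-radius means, since rings become stripes in polar coordinates), and all function names are assumptions, not the formulation used in this work.

```python
import numpy as np


def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))


def multifunctional_loss(generated, target, disc_score,
                         w_adv=1e-3, w_pix=1.0, w_ring=0.1):
    """Hypothetical combined generator loss (illustrative only).

    generated, target : 2-D image slices, assumed already in polar
                        coordinates so rings appear as horizontal stripes.
    disc_score        : discriminator's probability that `generated` is real.
    w_adv, w_pix, w_ring : assumed weighting coefficients.
    """
    # Adversarial term (non-saturating form): push disc_score toward 1.
    adv = -float(np.log(disc_score + 1e-12))
    # Pixel-wise fidelity between generated and ground-truth slices.
    pix = mse(generated, target)
    # Ring-artifact proxy: penalize deviation of per-radius (row) means,
    # where stripe-like ring residue would concentrate.
    ring = mse(generated.mean(axis=1), target.mean(axis=1))
    return w_adv * adv + w_pix * pix + w_ring * ring


# Toy usage with random slices
rng = np.random.default_rng(0)
target = rng.random((64, 64))
generated = target + 0.01 * rng.standard_normal((64, 64))
loss = multifunctional_loss(generated, target, disc_score=0.8)
```

In practice such a loss would drive a generator network during training, with the discriminator updated in alternation; the sketch only shows how the three objectives can be folded into a single scalar.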