This study aimed to generate and investigate a supervised deep learning algorithm for creating synthetic computed tomography (sCT) images from kilovoltage cone-beam computed tomography (kV-CBCT) images for adaptive radiation therapy (ART) in head and neck cancer (HNC). A supervised U-Net deep learning model was trained on 3,491 image pairs from planning computed tomography (pCT) and kV-CBCT datasets obtained from 40 HNC patients, with the dataset split into 80% for training and 20% for testing. The sCT images were evaluated against the pCT images in three aspects: Hounsfield unit (HU) accuracy, assessed using the mean absolute error (MAE) and root mean square error (RMSE); image quality, assessed using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) between sCT and pCT images; and dosimetric accuracy, encompassing 3D gamma passing rates for the dose distribution and percentage dose differences. MAE, RMSE, PSNR, and SSIM improved from their initial values of 53.15 ± 40.09 HU, 153.99 ± 79.78 HU, 47.91 ± 4.98 dB, and 0.97 ± 0.02 to 41.47 ± 30.59 HU, 130.39 ± 78.06 HU, 49.93 ± 6.00 dB, and 0.98 ± 0.02, respectively. For the dose evaluation, 3D gamma analysis of the dose distributions on the sCT images yielded passing rates of 92.1% ± 3.8%, 93.8% ± 3.0%, and 96.9% ± 2.0% under the 2%/2 mm, 3%/2 mm, and 3%/3 mm criteria, respectively. The sCT images showed only minor differences in the percentage dose to the investigated target and structure volumes. However, the sCT images exhibited anatomical variations when compared with the pCT images. These findings highlight the potential of the supervised U-Net deep learning model for generating kV-CBCT-based sCT images for ART in patients with HNC.
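As an illustration of the supervised setup described above, the sketch below pairs a compact 2D U-Net with a voxel-wise L1 loss for kV-CBCT-to-sCT regression in PyTorch. The channel widths, network depth, loss function, and slice size are assumptions for illustration only; the study reports just that a supervised U-Net was trained on paired pCT/kV-CBCT images.

```python
# Minimal 2D U-Net sketch for kV-CBCT -> sCT regression (PyTorch).
# Channel widths, depth, and the L1 loss are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, ch=(32, 64, 128, 256)):
        super().__init__()
        self.enc = nn.ModuleList()
        prev = 1                                  # single-channel CBCT slice in
        for c in ch:
            self.enc.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(ch[-1], ch[-1] * 2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        prev = ch[-1] * 2
        for c in reversed(ch):
            self.up.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.dec.append(conv_block(c * 2, c))
            prev = c
        self.head = nn.Conv2d(prev, 1, 1)         # single-channel sCT slice out

    def forward(self, x):
        skips = []
        for block in self.enc:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)

# One supervised training step on a paired (kV-CBCT, pCT) batch.
model = UNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                             # assumed voxel-wise loss
cbct = torch.randn(4, 1, 256, 256)                # placeholder paired batch
pct = torch.randn(4, 1, 256, 256)
optimizer.zero_grad()
loss = loss_fn(model(cbct), pct)
loss.backward()
optimizer.step()
```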
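The HU-accuracy and image-quality metrics reported above (MAE, RMSE, PSNR, SSIM) can be computed along the lines sketched below with NumPy and scikit-image. The data-range handling is an assumption; the study does not specify a normalization range or body mask.

```python
# Image-similarity metrics between sCT and reference pCT volumes.
# The data_range default is an assumption not specified in the abstract.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def sct_metrics(sct, pct, data_range=None):
    """Return (MAE, RMSE, PSNR, SSIM) for two HU arrays of equal shape."""
    sct = sct.astype(np.float64)
    pct = pct.astype(np.float64)
    if data_range is None:
        data_range = pct.max() - pct.min()        # assumed dynamic range
    diff = sct - pct
    mae = np.mean(np.abs(diff))                   # mean absolute error [HU]
    rmse = np.sqrt(np.mean(diff ** 2))            # root mean square error [HU]
    psnr = peak_signal_noise_ratio(pct, sct, data_range=data_range)
    ssim = structural_similarity(pct, sct, data_range=data_range)
    return mae, rmse, psnr, ssim
```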
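The 3D gamma passing rates could be obtained with a tool such as pymedphys, as in the hedged sketch below. The coordinate axes, the 10% lower-dose cutoff, and the helper gamma_pass_rate are illustrative assumptions, not the study's actual dose-comparison workflow.

```python
# 3D gamma passing rate between pCT-based and sCT-based dose grids (pymedphys).
# The lower-dose cutoff and grid handling are assumptions for illustration.
import numpy as np
import pymedphys

def gamma_pass_rate(axes, dose_pct, dose_sct, dose_pc, dist_mm):
    gamma = pymedphys.gamma(
        axes, dose_pct,                           # reference dose computed on pCT
        axes, dose_sct,                           # evaluated dose recomputed on sCT
        dose_percent_threshold=dose_pc,
        distance_mm_threshold=dist_mm,
        lower_percent_dose_cutoff=10,             # assumed cutoff
    )
    valid = ~np.isnan(gamma)
    return 100.0 * np.sum(gamma[valid] <= 1) / np.sum(valid)

# Example for the 3%/3 mm criterion on matching (z, y, x) dose grids:
# rate = gamma_pass_rate((z_mm, y_mm, x_mm), dose_pct, dose_sct, 3, 3)
```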