Existing hepatocellular carcinoma (HCC) prediction models are derived mainly from pretreatment or early on-treatment parameters. We aimed to reassess the dynamic changes in the performance of 17 HCC models in patients with chronic hepatitis B (CHB) during long-term antiviral therapy (AVT). Among 987 CHB patients receiving long-term entecavir therapy, 660 had 8 years of follow-up data. Model scores were calculated using on-treatment values at years 2.5, 3, 3.5, 4, 4.5, and 5 of AVT to predict HCC occurrence over the subsequent three years. Model performance was assessed with the area under the receiver operating characteristic curve (AUROC), and the ability of the original model cutoffs to distinguish different levels of HCC risk was evaluated with the log-rank test. The AUROCs of the 17 models ranged from 0.51 to 0.78 when on-treatment scores from years 2.5 to 5 were used. Models that included cirrhosis as a variable showed numerically higher AUROCs (pooled AUROCs of 0.65-0.73 for models developed in treated, untreated, or mixed-treatment populations) than models without the cirrhosis variable (treated or mixed models: AUROCs of 0.61-0.68; untreated models: AUROCs of 0.51-0.59). Stratification into low-, intermediate-, and high-risk levels by the original cutoff values no longer reflected the true HCC incidence when scores were calculated after year 3.5 of AVT for models without the cirrhosis variable and after year 4 for models with it. The performance of existing HCC prediction models, particularly those without the cirrhosis variable, declined in CHB patients on long-term AVT. Optimizing the existing models or developing novel models would be justified for better HCC prediction during long-term AVT.
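
To make the evaluation procedure concrete, the following is a minimal sketch in Python of how landmark AUROCs and cutoff-based risk stratification of this kind can be computed. It assumes a per-patient table with hypothetical column names (the score columns, outcome columns, and cutoff values below are illustrative placeholders, not the published variables or thresholds), and it uses scikit-learn and lifelines, which are not named in the source.

```python
# Minimal sketch: landmark AUROCs and log-rank test over risk strata.
# Assumes a pandas DataFrame `df` with one row per patient and
# hypothetical columns:
#   score_y2_5 ... score_y5   on-treatment model scores at each landmark year
#   hcc_3yr                   1 if HCC occurred within 3 years of the landmark
#   time_to_event, hcc_event  follow-up time (years) and HCC event indicator
import pandas as pd
from sklearn.metrics import roc_auc_score
from lifelines.statistics import multivariate_logrank_test

LANDMARK_SCORES = ["score_y2_5", "score_y3", "score_y3_5",
                   "score_y4", "score_y4_5", "score_y5"]

def auroc_by_landmark(df: pd.DataFrame) -> dict:
    """AUROC of each landmark score for three-year HCC occurrence."""
    return {col: roc_auc_score(df["hcc_3yr"], df[col])
            for col in LANDMARK_SCORES}

def risk_strata_logrank(df: pd.DataFrame, score_col: str,
                        low_cut: float, high_cut: float):
    """Stratify patients into low/intermediate/high risk by a model's
    original cutoffs and test whether the strata still separate
    observed HCC incidence (multivariate log-rank test)."""
    strata = pd.cut(df[score_col],
                    bins=[-float("inf"), low_cut, high_cut, float("inf")],
                    labels=["low", "intermediate", "high"]).astype(str)
    return multivariate_logrank_test(df["time_to_event"], strata,
                                     df["hcc_event"])

# Example usage (cutoffs are placeholders, not published values):
# aurocs = auroc_by_landmark(df)
# result = risk_strata_logrank(df, "score_y4", low_cut=5.0, high_cut=10.0)
# print(aurocs, result.p_value)
```

A non-significant log-rank p-value at a later landmark, in this framing, would correspond to the finding that the original cutoffs no longer separate the risk strata once scores are recalculated deep into therapy.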