The success of parathyroidectomy in primary hyperparathyroidism depends on the intraoperative differentiation of diseased from normal glands. Deep learning could potentially be applied to digitize this subjective interpretation process, which relies heavily on surgeon expertise. In this study, we aimed to investigate whether diseased and normal parathyroid glands have different near-infrared autofluorescence (NIRAF) signatures and whether related deep learning models can predict normal versus diseased parathyroid glands from intraoperative in-vivo images. This prospective study included patients who underwent parathyroidectomy for primary hyperparathyroidism or thyroidectomy with intraoperative NIRAF imaging at a single tertiary referral center from November 2019 to March 2024. Autofluorescence intensity and heterogeneity index of normal versus diseased parathyroid glands were compared, and a deep learning model was developed. NIRAF images of a total of 1,506 normal and 597 diseased parathyroid glands from 797 patients were analyzed. Compared with diseased glands, normal glands exhibited a higher median normalized NIRAF intensity [2.68 (2.19-3.23) vs 2.09 (1.68-2.56) pixels, p<.0001] and a lower heterogeneity index [0.11 (0.08-0.15) vs 0.18 (0.13-0.23), p<.0001]. On receiver operating characteristic (ROC) analysis, the optimal thresholds for predicting a diseased gland were 2.22 for pixel intensity and 0.14 for heterogeneity index. The deep learning model achieved a precision and recall of 83.3% each, with an area under the precision-recall curve (AUPRC) of 0.908. Normal and diseased parathyroid glands in primary hyperparathyroidism have different intraoperative NIRAF patterns that can be quantified with intensity and heterogeneity analyses. Visual deep learning models relying on these NIRAF signatures could be built to assist surgeons in differentiating normal from diseased parathyroid glands.
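As an illustration only, the sketch below shows how the reported thresholds (normalized intensity 2.22, heterogeneity index 0.14) and evaluation metrics (precision, recall, AUPRC) could be applied to per-gland measurements. It is not the authors' code: the synthetic measurements, the combined threshold rule, and the use of the heterogeneity index as a continuous score for AUPRC are all assumptions for demonstration, and the numbers it prints do not reproduce the study results.

```python
# Minimal sketch (not the study's pipeline): applies the reported NIRAF thresholds
# to hypothetical per-gland measurements and evaluates with precision, recall, AUPRC.
import numpy as np
from sklearn.metrics import precision_score, recall_score, average_precision_score

rng = np.random.default_rng(0)

# Hypothetical per-gland data (1 = diseased, 0 = normal); the means/SDs below are
# loosely based on the reported medians and are illustrative only.
labels = rng.integers(0, 2, size=200)
intensity = np.where(labels == 1,
                     rng.normal(2.09, 0.40, 200),   # diseased: lower normalized intensity
                     rng.normal(2.68, 0.40, 200))   # normal: higher normalized intensity
heterogeneity = np.where(labels == 1,
                         rng.normal(0.18, 0.05, 200),  # diseased: more heterogeneous
                         rng.normal(0.11, 0.03, 200))  # normal: more homogeneous

# Assumed decision rule combining the two reported thresholds: predict "diseased"
# when intensity falls below 2.22 or heterogeneity exceeds 0.14.
pred = ((intensity < 2.22) | (heterogeneity > 0.14)).astype(int)

# AUPRC needs a continuous score; here the heterogeneity index is used as an
# illustrative score (higher value = more likely diseased).
print("precision:", precision_score(labels, pred))
print("recall:   ", recall_score(labels, pred))
print("AUPRC:    ", average_precision_score(labels, heterogeneity))
```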