Abstract

At the current level of technology, commercial artificial intelligence (AI) cephalometric analysis must be accompanied by a human examiner's review to compensate for its insufficient performance. This study aimed to investigate the effect of the human examiner's expertise on the efficacy of AI analysis, in terms of both time savings and error reduction. Eighty-four pretreatment cephalograms were randomly selected for this study. First, human examiners (one beginner and two regular examiners) manually detected 15 cephalometric landmarks, and the time required was measured. Subsequently, commercial AI services identified these landmarks automatically. Finally, the human examiners reviewed the AI landmark determinations and adjusted them as needed, with the time required for the review process also measured. The elapsed times were then compared statistically. Systematic and random errors among examiners (the human examiners, the AI, and their combinations) were assessed with Bland-Altman analysis, and intraclass correlation coefficients were used to estimate inter-examiner reliability. No clinically significant time difference was observed regardless of AI use. The AI's measurement error decreased substantially after the human examiner's review. For the beginner, reviewing the AI output yielded better results than manual landmarking. However, the regular examiners' AI-review outcomes were not as good as their manual analyses, possibly because of AI-dependent landmark decisions. The reliability of AI analysis was also improved by the human examiner's review. Although the time-saving effect was not evident, commercial AI cephalometric services can currently be recommended for beginners.
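To make the statistical approach concrete, the sketch below shows how Bland-Altman bias and 95% limits of agreement, along with an intraclass correlation coefficient, can be computed in Python. This is a minimal illustration with hypothetical data: the abstract does not specify which ICC form the authors used, so the two-way random-effects, absolute-agreement, single-measure form ICC(2,1) is assumed here, and the variable names and coordinate values are invented for the example.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements from two
    examiners: the mean difference (systematic error) and the 95% limits
    of agreement (random error)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `ratings` is an (n_subjects, k_examiners) array."""
    x = np.asarray(ratings, float)
    n, k = x.shape
    grand = x.mean()
    # Two-way ANOVA decomposition of the total sum of squares
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-examiners mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: one landmark's x-coordinate (mm) on five
# cephalograms, measured by AI alone vs. AI reviewed by a human examiner.
ai = [52.1, 48.7, 55.3, 50.0, 47.9]
ai_reviewed = [51.8, 48.9, 54.6, 50.2, 48.3]
bias, limits = bland_altman(ai, ai_reviewed)
print(f"bias = {bias:.2f} mm, 95% limits of agreement = "
      f"({limits[0]:.2f}, {limits[1]:.2f}) mm")
print(f"ICC(2,1) = {icc_2_1(np.column_stack([ai, ai_reviewed])):.3f}")
```

In this framework, the bias captures a systematic offset between two examiners, the limits of agreement capture random error, and an ICC near 1 indicates high inter-examiner reliability.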
