Abstract
Dacryocystography (DCG) has been used to illustrate the morphological and functional aspects of the lacrimal drainage system in the evaluation of patients with maxillofacial trauma and epiphora. This study developed deep-learning models for the automatic classification of the status of the lacrimal passage based on DCG. The authors collected 719 DCG images from 430 patients with nasolacrimal duct obstruction. The obstruction images were further manually categorized into 2 binary categories based on the location of the obstruction: (1) upper obstruction and (2) lower obstruction. An upper obstruction was defined as one occurring within the canaliculus or common canaliculus, whereas a lower obstruction was defined as one within the lacrimal sac, duct-sac junction, or nasolacrimal duct. The authors then established a deep-learning model to automatically determine whether a passage was patent or obstructed. The accuracy, precision, sensitivity, F1 score, and area under the receiver operating characteristic curve for the evaluation set of each deep-learning model were 99.3%, 98.8%, 99.5%, 99.2%, and 0.9998, respectively, for obstruction detection, and 95.5%, 93.0%, 93.0%, 93.0%, and 0.9778 for classifying the obstruction location. Both receiver operating characteristic curves were skewed toward the left-upper region, indicating the high reliability of these models. The high accuracies of the obstruction detection model (99.3%) and the obstruction classification model (95.5%) demonstrate that deep-learning models can be reliable diagnostic tools for DCG images. This deep-learning model could enhance diagnostic consistency, enable non-specialists to interpret results accurately, and facilitate the efficient allocation of medical resources.
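The evaluation metrics reported above (accuracy, precision, sensitivity, F1 score, and AUC) are standard for binary classifiers. A minimal sketch of how such metrics can be computed for a patent-vs-obstructed classifier is shown below; this is not the authors' code, and the labels and scores are purely illustrative.

```python
# Sketch of binary-classification metrics, as reported in the study.
# Convention (assumed): 1 = obstructed (positive), 0 = patent (negative).

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, sensitivity (recall), and F1 from hard labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, f1

def auc(y_true, y_score):
    """Rank-based (Mann-Whitney) AUC; assumes no tied scores for brevity."""
    ranked = sorted(range(len(y_score)), key=lambda i: y_score[i])
    rank_sum = sum(r + 1 for r, i in enumerate(ranked) if y_true[i] == 1)
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Illustrative toy data (not from the study's evaluation set).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
y_score = [0.9, 0.8, 0.4, 0.2, 0.1, 0.6, 0.7, 0.3]

acc, prec, sens, f1 = binary_metrics(y_true, y_pred)
roc_auc = auc(y_true, y_score)
```

The same calculation applies unchanged to the upper-vs-lower obstruction-location model, since that task is likewise binary.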