Abstract

Surgeons choose the treatment for patients with epiglottis obstruction based on its severity, typically by estimating that severity on a three-degree obstruction scale from drug-induced sleep endoscopy (DISE) images. However, these coarse obstruction degrees are inadequate because they correspond poorly to changes in respiratory airflow. Current artificial intelligence image-analysis technologies can address this shortcoming. To improve the accuracy of epiglottis obstruction assessment and replace obstruction degrees with obstruction ratios, this study developed a computer vision system with a deep learning-based method for calculating epiglottis obstruction ratios. The system employs a convolutional neural network, the YOLOv4 model, to localize the epiglottis cartilage, a color quantization method to transform pixels into regions, and a region puzzle algorithm to determine the extent of the patient's epiglottis airway; this information is then used to compute the obstruction ratio at the patient's epiglottis site. The system also combines web-based and PC-based programming technologies to implement its functions. In experimental validation, the system autonomously calculated obstruction ratios with a precision of 0.1% over the range 0% to 100%. It presents epiglottis obstruction levels as continuous data, giving surgeons crucial diagnostic information for assessing the severity of a patient's epiglottis obstruction.
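
As a rough illustration of the kind of computation the abstract describes, the Python sketch below (not the authors' code) quantizes the colors inside a detected epiglottis region of interest and treats the darkest color cluster as the open airway lumen; the obstruction ratio is then the fractional loss of open-airway area relative to a hypothetical unobstructed baseline. The cropped region, the baseline pixel count, and the darkest-cluster assumption are illustrative stand-ins for the paper's YOLOv4 localization and region puzzle algorithm.

    # Illustrative sketch only: estimate an epiglottis obstruction ratio from a
    # detected region of interest, assuming the airway lumen is the darkest
    # color cluster in a DISE frame.
    import numpy as np
    from sklearn.cluster import KMeans

    def quantize_colors(roi_rgb, n_colors=4, seed=0):
        """Color quantization: group ROI pixels into n_colors color clusters."""
        h, w, _ = roi_rgb.shape
        pixels = roi_rgb.reshape(-1, 3).astype(np.float32)
        km = KMeans(n_clusters=n_colors, n_init=10, random_state=seed).fit(pixels)
        return km.labels_.reshape(h, w), km.cluster_centers_

    def obstruction_ratio(roi_rgb, baseline_open_px, n_colors=4):
        """Obstruction ratio = 1 - (current open-airway area / baseline open area).

        baseline_open_px is a hypothetical reference: the open-airway pixel
        count measured when the airway is unobstructed.
        """
        labels, centers = quantize_colors(roi_rgb, n_colors)
        # Assumption: the darkest cluster (lowest mean intensity) is the open lumen.
        lumen_cluster = int(np.argmin(centers.mean(axis=1)))
        open_px = int(np.sum(labels == lumen_cluster))
        ratio = 1.0 - open_px / float(baseline_open_px)
        return round(max(0.0, min(1.0, ratio)) * 100, 1)  # percent, 0.1% steps

    if __name__ == "__main__":
        # Synthetic 100x100 ROI: dark "lumen" square on brighter tissue.
        rng = np.random.default_rng(0)
        roi = np.full((100, 100, 3), 180, dtype=np.uint8)
        roi[30:70, 30:70] = 30                              # dark open-airway region
        roi = roi + rng.integers(0, 10, roi.shape, dtype=np.uint8)
        print(obstruction_ratio(roi, baseline_open_px=100 * 100 // 2), "%")

In this toy example the open lumen covers 1600 of a 5000-pixel baseline, so the reported obstruction ratio is 68.0%; the actual system derives both the region of interest and the airway extent automatically from the detection and region puzzle stages.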
