Abstract

Nasotracheal intubation (NTI) is one of the most commonly performed procedures in anesthesia and is considered the gold standard for securing a patient's airway. Endoscope operation is critical to the success of NTI. However, this operation remains challenging because it requires the surgeon to classify anatomical landmarks and detect the heading target of the endoscope tip from a sequence of monocular images. To address this problem, this study presents a learning-based navigation method that automatically classifies four anatomical landmarks and detects the heading target of the endoscope tip in endoscopic images. First, an end-to-end multitask network is introduced that consists of one branch for anatomical landmark classification and another for heading target detection. In addition, a convolutional attention module combining spatial and channel attention is designed to improve network performance. Second, an endoscopic dataset named intuNav is built for network training. The trained network computes navigation information without requiring prior knowledge of the images. Finally, extensive experiments on the built dataset and on endoscopic videos demonstrate the high performance of our method, which achieves a classification accuracy of 94% and a detection accuracy of 79.4%. The results also indicate that the proposed method is effective and efficient for generating navigation information during NTI.
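
For readers unfamiliar with the described architecture, the following is a minimal PyTorch sketch of the kind of network the abstract outlines: a shared encoder, a convolutional attention module combining channel and spatial attention, a four-class landmark-classification branch, and a heading-target-detection branch. The backbone, layer sizes, and the detection head (regressing a 2D target point here) are assumptions for illustration, not the paper's actual design.

import torch
import torch.nn as nn

class ConvAttention(nn.Module):
    """Channel attention followed by spatial attention (an assumed CBAM-style design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: pool spatially, then weight each feature channel.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: pool over channels, then weight each spatial location.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

class MultiTaskNavNet(nn.Module):
    """Shared encoder with two task branches (hypothetical layer sizes)."""
    def __init__(self, num_landmarks=4):
        super().__init__()
        self.encoder = nn.Sequential(                  # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            ConvAttention(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(64, num_landmarks)   # anatomical landmark class
        self.det_head = nn.Linear(64, 2)               # heading target as (x, y)

    def forward(self, x):
        f = self.encoder(x)
        return self.cls_head(f), self.det_head(f)

# Usage: one forward pass on a dummy endoscopic frame.
logits, target_xy = MultiTaskNavNet()(torch.randn(1, 3, 224, 224))

Both branches share the encoder features, so a single forward pass yields the landmark class and the heading target together, which is what makes the multitask formulation efficient for real-time navigation.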
