Abstract
Judging swallowing kinematic impairments via videofluoroscopy is the gold standard for detecting and evaluating swallowing disorders. However, the efficiency and accuracy of such biomechanical kinematic analysis vary significantly among human judges, depending mainly on their training and experience. Here, we show that a novel machine learning algorithm can automatically detect, in real time and with high accuracy, the key anatomical points needed for a routine swallowing assessment. We trained a two-stage convolutional neural network to localize and measure the vertebral bodies using 1518 swallowing videofluoroscopies from 265 patients. The model achieved high accuracy: the mean distance between predicted points and annotations was 4.20 ± 5.54 pixels, compared with a human inter-rater error of 4.35 ± 3.12 pixels. Furthermore, 93% of predicted points were within five pixels of the annotated points when tested on an independent dataset from 70 subjects. By providing an efficient and accurate method for anatomical landmark localization in real time, a task previously performed with a time-consuming offline procedure, our model offers speech-language pathologists more options in their routine clinical swallowing assessments.
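The accuracy figures reported above can be reproduced once predicted and annotated landmark coordinates are available. The sketch below is a minimal illustration, not the authors' code: the function name, array layout, and dummy coordinates are assumptions; only the Euclidean-distance error and the five-pixel threshold come from the abstract.

```python
import numpy as np

def landmark_errors(pred, gt, threshold_px=5.0):
    """Evaluate predicted landmarks against annotations.

    pred, gt : arrays of shape (N, 2) holding (x, y) pixel coordinates
               of predicted and annotated points, row-aligned.
    Returns the mean and standard deviation of the Euclidean error
    (in pixels) and the fraction of points within `threshold_px`.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    errors = np.linalg.norm(pred - gt, axis=1)   # per-point distance in pixels
    within = np.mean(errors < threshold_px)      # fraction within the threshold
    return errors.mean(), errors.std(), within

# Hypothetical usage with dummy coordinates:
pred = np.array([[120.0, 88.0], [131.0, 142.0]])
gt = np.array([[118.0, 90.0], [128.0, 140.0]])
mean_err, std_err, frac5 = landmark_errors(pred, gt)
print(f"mean error {mean_err:.2f} ± {std_err:.2f} px; within 5 px: {frac5:.0%}")
```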