Abstract

Despite the wide range of cleft lip morphology, consistent scales to categorize preoperative severity do not exist. Machine learning has been used to increase accuracy and efficiency in detection and rating of multiple conditions, yet it has not been applied to cleft disease. The authors tested a machine learning approach to automatically detect and measure facial landmarks and assign severity grades using preoperative photographs. Preoperative images were collected from 800 unilateral cleft lip patients, manually annotated for cleft-specific landmarks, and rated using a previously validated severity scale by eight expert reviewers. Five convolutional neural network models were trained for landmark detection and severity grade assignment. Mean squared error loss and Pearson correlation coefficient for cleft width ratio, nostril width ratio, and severity grade assignment were calculated. All five models performed well in landmark detection and severity grade assignment, with the largest and most complex model, Residual Network, performing best (mean squared error, 24.41; cleft width ratio correlation, 0.943; nostril width ratio correlation, 0.879; severity correlation, 0.892). The mobile device-compatible network, MobileNet, also showed a high degree of accuracy (mean squared error, 36.66; cleft width ratio correlation, 0.901; nostril width ratio correlation, 0.705; severity correlation, 0.860). Machine learning models demonstrate the ability to accurately measure facial features and assign severity grades according to validated scales. Such models hold promise for the creation of a simple, automated approach to classifying cleft lip morphology. Further potential exists for a mobile telephone-based application to provide real-time feedback to improve clinical decision making and patient counseling.
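The evaluation metrics the abstract reports — mean squared error for landmark detection and Pearson correlation for the derived cleft width ratio, nostril width ratio, and severity grade — can be illustrated with a minimal sketch. The arrays below are synthetic placeholder values, not the study's data, and the variable names are illustrative only.

```python
import numpy as np

# Synthetic example of the study's evaluation metrics: MSE between
# predicted and annotated values, and Pearson correlation between them.
rng = np.random.default_rng(0)

# Hypothetical manually annotated cleft width ratios for 10 patients
true_cwr = rng.uniform(0.1, 0.9, size=10)
# Simulated model predictions: ground truth plus small noise
pred_cwr = true_cwr + rng.normal(0.0, 0.02, size=10)

# Mean squared error between predictions and annotations
mse = float(np.mean((pred_cwr - true_cwr) ** 2))
# Pearson correlation coefficient between predictions and annotations
pearson_r = float(np.corrcoef(true_cwr, pred_cwr)[0, 1])

print(f"MSE: {mse:.4f}, Pearson r: {pearson_r:.3f}")
```

The same two metrics would be computed analogously for nostril width ratio and severity grade; a high correlation with low MSE indicates the model's measurements track the expert annotations closely.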
