A novel machine learning framework is introduced that consistently detects, localizes, and measures the severity of human congenital cleft lip anomalies. The goal is to fill an important clinical void: providing an objective, clinically feasible method of gauging baseline facial deformity and the change achieved through reconstructive surgical intervention. The proposed method first employs the StyleGAN2 generative adversarial network with model adaptation to produce a normalized transformation of 125 faces, then uses pixel-wise subtraction to assess the difference between each baseline image and its normalized counterpart as a proxy for severity of deformity. The pipeline consists of the following steps: image preprocessing, face normalization, color transformation, heat-map generation, morphological erosion, and abnormality scoring. Heat maps that finely discern anatomic anomalies visually corroborate the generated scores. The framework is validated through computer simulations as well as by comparison of machine-generated scores against human ratings of facial images. The anomaly scores yielded by the proposed computer model correlate closely with human ratings (Pearson's r = 0.89). The proposed pixel-wise measurement technique is shown to mirror human ratings of cleft faces more closely than two existing, state-of-the-art image quality metrics, Learned Perceptual Image Patch Similarity (LPIPS) and the Structural Similarity Index (SSIM). The proposed model may represent a new standard for objective, automated, and real-time clinical measurement of faces affected by congenital cleft deformity.
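To make the scoring stage concrete, the following is a minimal sketch of how the pixel-wise subtraction, color transformation, heat-map generation, morphological erosion, and abnormality scoring steps could be chained together. The choice of CIELAB color space, the per-pixel Euclidean distance, the 3×3 erosion kernel, and the mean-intensity score are all illustrative assumptions, not the authors' exact parameters; the normalized image is assumed to come from the separately trained StyleGAN2 model.

```python
import cv2
import numpy as np

def abnormality_score(baseline_bgr: np.ndarray, normalized_bgr: np.ndarray):
    """Return (score, heatmap) from a baseline face and its normalized counterpart."""
    # Color transformation: compare in a perceptually motivated space
    # (CIELAB is an assumption for illustration).
    base = cv2.cvtColor(baseline_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    norm = cv2.cvtColor(normalized_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    # Pixel-wise subtraction: per-pixel Euclidean distance across channels.
    diff = np.linalg.norm(base - norm, axis=2)

    # Heat-map generation: rescale the difference map to [0, 255].
    heatmap = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Morphological erosion: suppress isolated, noise-level responses
    # (3x3 kernel and a single iteration are illustrative choices).
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(heatmap, kernel, iterations=1)

    # Abnormality scoring: mean residual intensity as a scalar severity proxy.
    return float(eroded.mean()), eroded
```

A higher score indicates a larger residual between the face and its GAN-normalized counterpart, which the abstract treats as a proxy for deformity severity.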
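The reported validation correlates machine-generated scores with human ratings; a sketch of that comparison is below. The numeric values here are placeholders for illustration only, not the study's data (the abstract reports r = 0.89 for the proposed metric); LPIPS and SSIM baselines would be correlated with the same human ratings in the same way.

```python
from scipy.stats import pearsonr

# Placeholder values for illustration only, not the study's data.
model_scores  = [0.12, 0.45, 0.33, 0.80, 0.27]   # abnormality_score(...) per face
human_ratings = [1.0, 3.5, 2.8, 4.9, 2.1]        # mean clinician rating per face

r, p = pearsonr(model_scores, human_ratings)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```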