Abstract

PURPOSE: Artificial intelligence is transforming the ability to learn from complex data. The goal of this study was to create a validated machine learning model that identifies gender differences in facial feature curvature.

METHODS: Three-dimensional photographs of 75 men and 75 women aged 20-29 were each divided into 100 cross-sectional images and used as training, validation, and test data to build a convolutional neural network that classifies gender from facial curvature. Gradient-weighted Class Activation Mapping was then applied to identify the facial curvatures the model relied on when inferring gender.

RESULTS: The facial target area was bounded superiorly by the eyebrows, inferiorly by the upper vermilion, and laterally by the cheekbone apices. The model classified gender with over 90% accuracy using any sagittal plane within the target area for females and any lateral sagittal plane for males. The curvature of the supraorbital ridge and of the nasal ala, tip, and dorsum determined masculine features, while the cheeks, eyelids, and glabella determined feminine features.

CONCLUSION: The machine learning model identified novel features as determinants of gender that do not currently serve as areas of focus for gender-affirming surgery. The objective curvature measurements advance the sparse, subjective literature on facial feminization/masculinization and can be applied to pre-operative planning for any facial reconstructive or aesthetic procedure.
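
The METHODS pipeline (slice each 3D scan into 2D cross-sections, train a convolutional neural network to classify gender, then apply Gradient-weighted Class Activation Mapping to see which curvature regions drive each prediction) can be sketched in a few dozen lines. The sketch below is illustrative only and is not the study's code: the network architecture, the 128x128 single-channel input, and the class encoding (0 = female, 1 = male) are assumptions made for demonstration.

```python
# Minimal sketch (not the authors' implementation) of a CNN gender classifier for
# cross-sectional curvature images, with a bare-bones Grad-CAM to highlight the
# regions supporting a given prediction. Sizes and labels are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CurvatureCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # last conv block: Grad-CAM target
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        fmap = self.features(x)                          # (N, 64, H', W')
        logits = self.classifier(self.pool(fmap).flatten(1))
        return logits, fmap

def grad_cam(model, image, target_class):
    """Return a heatmap over the input showing regions supporting target_class."""
    model.eval()
    image = image.requires_grad_(True)
    logits, fmap = model(image)
    fmap.retain_grad()                                   # keep gradients on the feature map
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

# Example with one synthetic 128x128 curvature slice; class 0 = female, 1 = male (assumed).
model = CurvatureCNN()
slice_img = torch.randn(1, 1, 128, 128)
heatmap = grad_cam(model, slice_img, target_class=1)
print(heatmap.shape)  # torch.Size([128, 128])
```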
