Abstract

Background: Rhinoplasty enhances facial symmetry and functionality; however, the accurate and reliable quantification of nasal defects before surgery remains an ongoing challenge.

Aim: This study introduces a novel approach to defect quantification using 2D images and artificial intelligence, providing a tool for better preoperative planning and improved surgical outcomes.

Materials and Methods: A pre-trained AI model for facial landmark detection was applied to a dataset of 250 images of male patients aged 18 to 24 who underwent rhinoplasty for cosmetic nasal deformity correction. The analysis concentrated on 36 distances between facial landmarks. These distances were normalised using min-max scaling to counter variations in image size and quality. After normalisation, statistical parameters, including mean, median, and standard deviation, were calculated to identify and quantify nasal defects.

Results: The methodology was tested and validated on images from different ethnicities and regions, showing promising potential as a surgical aid. The normalised data produced reliable quantifications of nasal defects (average 76.2%), aiding preoperative planning and improving surgical outcomes and patient satisfaction.

Applications: The developed method can be extended to other facial plastic surgeries. It can also be used to build app-based software, assist medical education, and improve patient-doctor communication.

Conclusion: This novel method for defect quantification in rhinoplasty using AI and image processing holds significant potential to improve surgical planning, outcomes, and patient satisfaction, marking an important step in the fusion of AI and plastic surgery.
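
The Materials and Methods describe a simple measurement pipeline: detect facial landmarks with a pre-trained model, compute 36 inter-landmark distances, min-max scale them, and summarise them with mean, median, and standard deviation. The Python sketch below illustrates that pipeline under stated assumptions; detect_landmarks, LANDMARK_PAIRS, the specific landmark indices, and the per-image scaling axis are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of the pipeline the abstract
# describes: landmark detection -> 36 inter-landmark distances -> min-max
# scaling -> mean/median/standard deviation across the cohort.
import numpy as np

def detect_landmarks(image: np.ndarray) -> np.ndarray:
    """Hypothetical wrapper around a pre-trained facial landmark detector.
    Should return an (N, 2) array of (x, y) pixel coordinates for one face."""
    raise NotImplementedError("plug a pre-trained landmark model in here")

# Hypothetical landmark index pairs; the study used 36 distances in total.
LANDMARK_PAIRS = [(27, 30), (30, 33), (31, 35)]  # ...extend to 36 pairs

def inter_landmark_distances(landmarks: np.ndarray) -> np.ndarray:
    """Euclidean distance for each chosen landmark pair."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in LANDMARK_PAIRS])

def min_max_scale(distances: np.ndarray) -> np.ndarray:
    """Scale one image's distance vector to [0, 1] so absolute image size
    drops out (the exact scaling axis is an assumption)."""
    rng = distances.max() - distances.min()
    return (distances - distances.min()) / (rng if rng else 1.0)

def summarise(cohort: np.ndarray) -> dict:
    """Per-distance mean, median, and standard deviation across all images."""
    return {
        "mean": cohort.mean(axis=0),
        "median": np.median(cohort, axis=0),
        "std": cohort.std(axis=0),
    }

# Usage (pseudo): one scaled distance vector per patient image, then summarise.
# cohort = np.vstack([min_max_scale(inter_landmark_distances(detect_landmarks(img)))
#                     for img in images])
# stats = summarise(cohort)
```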
