Abstract

Facial anthropometry based on 3-dimensional (3D) imaging technology, or 3D photogrammetry, has gained increasing popularity among surgeons. It outperforms direct measurement and 2-dimensional (2D) photogrammetry in many respects, but a major limitation is the time-consuming process of manual landmark localization. To address this problem, this study developed a U-NET-based deep learning algorithm for automated and accurate anatomical landmark detection on 3D facial models. The main structure of the algorithm stacks 2 U-NETs; each U-NET block uses 3×3 convolution kernels with the rectified linear unit (ReLU) as the activation function. A total of 200 3D images of healthy cases, acromegaly patients, and localized scleroderma patients were captured with a Vectra H1 handheld 3D camera and used for algorithm training. The algorithm was tested on the detection of 20 landmarks on 3D images. The percentage of correct key points (PCK) and the normalized mean error (NME) were used to evaluate landmark detection accuracy. Among healthy cases, the average NME was 1.4 mm, and the PCK reached 90% when the threshold was set to the clinically acceptable limit of 2 mm. The average NME was 2.8 mm among acromegaly patients and 2.2 mm among localized scleroderma patients. This study developed a deep learning algorithm for automated facial landmark detection on 3D images and validated it in 3 different groups of participants. The algorithm achieved accurate landmark detection and improved the efficiency of 3D image analysis.
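To make the reported evaluation concrete, below is a minimal Python sketch of how PCK and NME can be computed from predicted and ground-truth 3D landmark coordinates. It assumes landmarks are given in millimetres and that NME here denotes the mean per-landmark Euclidean error (the abstract reports it in mm); the function name, array shapes, and sample data are illustrative and not taken from the paper.

```python
import numpy as np

def nme_pck(pred, gt, threshold_mm=2.0):
    """Compute mean landmark error (NME, in mm) and percentage of
    correct key points (PCK) for predicted 3D landmarks.

    pred, gt: arrays of shape (n_landmarks, 3), coordinates in mm.
    threshold_mm: a landmark counts as correct if its Euclidean error
    is at or below this distance (2 mm = clinically acceptable limit).
    """
    errors = np.linalg.norm(pred - gt, axis=1)   # per-landmark error in mm
    nme = errors.mean()                          # mean error across landmarks
    pck = (errors <= threshold_mm).mean()        # fraction within threshold
    return nme, pck

# Illustrative usage with synthetic data for 20 landmarks.
rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 100.0, size=(20, 3))       # hypothetical ground truth (mm)
pred = gt + rng.normal(0.0, 1.0, size=(20, 3))   # hypothetical predictions (mm)
nme, pck = nme_pck(pred, gt)
print(f"NME = {nme:.2f} mm, PCK@2mm = {pck:.0%}")
```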
