Abstract

Face parsing refers to labeling each facial component in a face image and has been employed in facial simulation, expression recognition, and makeup application, effectively providing a basis for further analysis, computation, animation, modification, and numerous other applications. Although existing face parsing methods have demonstrated good performance, they fail to extract rich features and recover accurate segmentation maps, particularly for faces with large variations in expression and highly similar component appearances. Moreover, these approaches neglect the semantic gaps and dependencies between facial categories and their boundaries. To address these drawbacks, we propose an efficient dilated convolution network with kernels of different aspect ratios that exploits their feature extraction capability to produce accurate face parsing outputs. The proposed multiscale dilated encoder–decoder convolution network obtains rich component information and efficiently improves the capture of global context by combining low- and high-level semantic features. To achieve precise parsing of the facial components along their borders and to analyze the connections between face categories and their boundary edges, a semantic edge map is learned using a conditional random field, which distinguishes border from non-border pixels during modeling. We conducted experiments on three well-known, publicly available face databases. The results demonstrate the high accuracy and capacity of the proposed method compared with previous state-of-the-art methods. Our model achieved a mean accuracy of 90% on the CelebAMask-HQ dataset for the category case and 81.43% for the accessory case, and accuracies of 91.58% and 92.44% on the HELEN and LaPa datasets, respectively, demonstrating its effectiveness.
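For concreteness, the sketch below illustrates the kind of multiscale dilated convolution block the abstract describes: parallel dilated 3x3 convolutions at increasing rates, fused by a 1x1 convolution, in the spirit of atrous spatial pyramid pooling. Dilated convolutions enlarge the receptive field without downsampling, which matches the stated goal of capturing global context while preserving fine detail near component borders. This is a minimal PyTorch sketch under our own assumptions; the module name MultiscaleDilatedBlock, the dilation rates, and the concatenate-then-fuse scheme are illustrative and not taken from the paper, which additionally varies the kernel aspect ratios.

import torch
import torch.nn as nn

class MultiscaleDilatedBlock(nn.Module):
    """Hypothetical multiscale block: parallel dilated 3x3 convolutions
    whose outputs are concatenated and fused by a 1x1 convolution.
    Each branch sees the input at a different receptive-field scale."""

    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding == dilation keeps the spatial size unchanged
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# Shape check: the block preserves spatial resolution.
x = torch.randn(1, 64, 128, 128)
y = MultiscaleDilatedBlock(64, 64)(x)
assert y.shape == (1, 64, 128, 128)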
