Abstract

This paper presents a method for the automatic processing of human faces in color images. We describe a new system that works hierarchically, first detecting the positions of human faces and their features (such as the eyes, nose, and mouth) and then extracting their contours and feature points. The positions of faces and their parts are detected by applying the integral projection method, which uses both color information (skin and hair color) and edge information (intensity and sign). A multiple active contour model, whose energy terms incorporate color information, is used to extract the contour lines of the facial features. Facial feature points are then determined from the optimized contours. A 3D facial model constructed from these points can be used to synthesize facial expressions or change the viewpoint. The proposed system is confirmed to be effective and robust for face images with complex backgrounds.
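The following is a minimal sketch of the integral projection idea referred to above: a binary skin-color mask is summed along rows and columns, and the resulting profiles are thresholded to localize a rough face region. The YCrCb skin thresholds, the OpenCV/NumPy usage, and the input filename are illustrative assumptions; the paper's actual detector also uses hair color and signed edge information, which are omitted here.

```python
# Sketch of face localization by integral projection on a skin-color mask.
# Thresholds and library choices are assumptions, not the authors' exact method.
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Rough skin-color segmentation in YCrCb space (thresholds are assumed)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)  # 255 where the pixel looks like skin

def integral_projections(mask):
    """Sum the binary mask along rows and columns."""
    vertical = mask.sum(axis=0)    # one value per column -> horizontal face extent
    horizontal = mask.sum(axis=1)  # one value per row    -> vertical face extent
    return vertical, horizontal

def bounding_range(projection, threshold_ratio=0.3):
    """First/last index where the projection exceeds a fraction of its peak."""
    threshold = threshold_ratio * projection.max()
    indices = np.flatnonzero(projection > threshold)
    return (indices[0], indices[-1]) if indices.size else (0, len(projection) - 1)

if __name__ == "__main__":
    image = cv2.imread("face.jpg")  # hypothetical input image
    v_proj, h_proj = integral_projections(skin_mask(image))
    x0, x1 = bounding_range(v_proj)
    y0, y1 = bounding_range(h_proj)
    print(f"Approximate face box: x=[{x0}, {x1}], y=[{y0}, {y1}]")
```

The same projection step can be repeated inside the detected face box to roughly locate the eyes, nose, and mouth before the contour-extraction stage.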
