Abstract

We aim to model the appearance of the lower face region to assist visual feature extraction for audio-visual speech processing applications. In this paper, we present a neural network based statistical appearance model of the lips which classifies pixels as belonging to the lips, skin, or inner mouth classes. This model requires labeled examples for training, and we propose to label images automatically by employing a lip-shape model and a red-hue energy function. To improve lip-tracking performance, we propose to use blue marked-up image sequences of the same subject uttering the identical sentences as natural, non-marked-up ones. The lip shapes, easily extracted from the blue images, are then mapped to the natural ones using acoustic information. The resulting lip-shape estimates simplify lip-tracking on the natural images, as they reduce the parameter-space dimensionality in the red-hue energy minimization, thus yielding better contour shape and location estimates. We applied the proposed method to a small audio-visual database of three subjects, achieving pixel classification errors of around 6%, compared to 3% for hand-placed contours and 20% for filtered red-hue.
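As an illustrative, hedged sketch of the kind of classifier described above (not the authors' implementation), the following Python fragment trains an MLP to assign small image blocks to the lips, skin, or inner mouth class. The block size, the flattened-RGB block features, and the use of scikit-learn are assumptions made only for this example; in the paper the labels come from the automatic labeling procedure, whereas here they are placeholders.

```python
# Minimal sketch: MLP that classifies image blocks as lips / skin / inner mouth.
# Class labels here: 0 = lips, 1 = skin, 2 = inner mouth (an assumed encoding).
import numpy as np
from sklearn.neural_network import MLPClassifier

BLOCK = 8  # hypothetical block size in pixels

def block_features(image, top, left, size=BLOCK):
    """Flatten an RGB block into a normalized feature vector."""
    patch = image[top:top + size, left:left + size, :].astype(np.float32)
    return (patch / 255.0).ravel()

# X: one row per labeled block; y: class labels that, in the paper, would come
# from the automatic labeling (lip-shape model + red-hue energy). Placeholders here.
X = np.random.rand(300, BLOCK * BLOCK * 3)
y = np.random.randint(0, 3, size=300)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
clf.fit(X, y)

# Classify a new block taken from a (synthetic) natural image.
image = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
print(clf.predict([block_features(image, 40, 60)]))
```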

Highlights

  • Today, automatic speech recognition (ASR) works well for several applications, but performance depends heavily on the specificity of the task and on the type and level of surrounding noise

  • Using visual information in unconstrained conditions requires accurate visual feature extraction, regardless of the visual features used: (i) pixel-based features: images are fed directly into a speech recognition system [4, 5, 8, 13], after applying a few transformations or normalizations to the images (fixed-size region of interest (ROI) cropping or histogram normalization, for example; see the sketch after this list); (ii) model-based features: a model is located on images, and the parameters to be used for ASR are deduced from the location and shape of the model

  • We present here a multilayer perceptron (MLP)-based statistical appearance model of the lips which classifies pixels as belonging to the lips, skin, or inner mouth classes

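The pixel-based preprocessing mentioned in the highlights (fixed-size ROI cropping and histogram normalization) can be illustrated with a small sketch. The ROI size, the mouth-center coordinates, and the plain-NumPy histogram equalization below are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np

def crop_roi(gray, center, size=(32, 48)):
    """Crop a fixed-size region of interest around an assumed mouth center (row, col)."""
    h, w = size
    r, c = center
    return gray[r - h // 2: r + h // 2, c - w // 2: c + w // 2]

def equalize_histogram(gray):
    """Histogram normalization: map gray levels through the normalized cumulative histogram."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # rescale to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[gray]

# Example: crop a mouth ROI from a synthetic grayscale frame and normalize it.
frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
roi = equalize_histogram(crop_roi(frame, center=(180, 160)))
print(roi.shape, roi.min(), roi.max())
```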

Summary

INTRODUCTION

Automatic speech recognition (ASR) works well for several applications, but performance depends heavily on the specificity of the task and on the type and level of surrounding noise. We present here an MLP-based statistical appearance model of the lips which classifies pixels as belonging to the lips, skin, or inner mouth classes. Such an ANN requires labeled examples for training, and these may only be found on natural images. The lip-shape estimates obtained simplify lip-tracking on the natural images, as they reduce the parameter-space dimensionality in the red-hue energy minimization, yielding better contour shape and location estimates. Such lip contours can be used to automatically label image blocks as belonging to one of the three classes of interest.
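To illustrate the last point, the sketch below labels block centers as lips, inner mouth, or skin by testing them against outer and inner lip contours with simple point-in-polygon tests. The contour representation (polygon vertex lists) and the example coordinates are hypothetical stand-ins for contours produced by the shape model, not the paper's exact labeling rule.

```python
import numpy as np
from matplotlib.path import Path

LIPS, SKIN, INNER_MOUTH = 0, 1, 2  # assumed class encoding

def label_block(center, outer_contour, inner_contour):
    """Label a block center using point-in-polygon tests against the lip contours."""
    outer, inner = Path(outer_contour), Path(inner_contour)
    if inner.contains_point(center):
        return INNER_MOUTH
    if outer.contains_point(center):
        return LIPS
    return SKIN

# Hypothetical contours as (x, y) vertex lists standing in for shape-model output.
outer = [(20, 40), (80, 30), (140, 40), (80, 70)]
inner = [(50, 45), (80, 40), (110, 45), (80, 55)]
print(label_block((80, 50), outer, inner))   # inside the inner contour -> INNER_MOUTH
print(label_block((30, 42), outer, inner))   # between the contours -> LIPS
print(label_block((5, 5), outer, inner))     # outside both contours -> SKIN
```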

LIP APPEARANCE MODELING
Literature approaches
Statistical modeling of lip appearance
Training of the lip appearance model
LIP SHAPE MODELING
Lip-contour extraction in “blue” images
Shape model building
Shape model evaluation
LIP CONTOUR LOCATION ON NATURAL IMAGES
Joint lip-shape and location estimation
Cascade lip shape and location estimation using acoustic information
Use of acoustic information for lip shape estimation
Lip contour location estimation
The audio-visual database
The evaluation paradigm
Lip contour evaluation
Appearance model evaluation
Experimental results
Lip-shape estimation
Quality of location
Appearance model accuracy
SUMMARY AND DISCUSSION
